Test Report: Docker_Linux_crio_arm64 21683

1b58c48826b6fb4d6f7297e87780eae465bc5f37:2025-10-19:41984

Test failures (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.44
35 TestAddons/parallel/Registry 16.69
36 TestAddons/parallel/RegistryCreds 0.53
37 TestAddons/parallel/Ingress 145.58
38 TestAddons/parallel/InspektorGadget 6.32
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 40.56
42 TestAddons/parallel/Headlamp 3.58
43 TestAddons/parallel/CloudSpanner 5.37
44 TestAddons/parallel/LocalPath 8.45
45 TestAddons/parallel/NvidiaDevicePlugin 5.28
46 TestAddons/parallel/Yakd 5.29
91 TestFunctional/parallel/DashboardCmd 302.61
98 TestFunctional/parallel/ServiceCmdConnect 603.57
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.82
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
136 TestFunctional/parallel/ServiceCmd/Format 0.55
137 TestFunctional/parallel/ServiceCmd/URL 0.49
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
191 TestJSONOutput/pause/Command 1.99
197 TestJSONOutput/unpause/Command 2.25
282 TestPause/serial/Pause 8.62
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.57
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.52
353 TestStartStop/group/old-k8s-version/serial/Pause 6.68
361 TestStartStop/group/no-preload/serial/Pause 6.49
365 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.48
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.48
377 TestStartStop/group/embed-certs/serial/Pause 8.7
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.39
388 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.7
392 TestStartStop/group/newest-cni/serial/Pause 5.78
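
The first three failures excerpted below (Volcano, Registry, RegistryCreds) all exit with the same MK_ADDON_DISABLE_PAUSED error, so re-running a single test locally is usually enough for triage. A minimal sketch of such a re-run, assuming an arm64 build at out/minikube-linux-arm64; the exact flag names live in the minikube Makefile and test/integration/main_test.go, so treat this invocation as an assumption:

	# hypothetical local re-run of one failing test against the crio runtime;
	# verify TEST_ARGS handling against the Makefile before relying on it
	env TEST_ARGS="-minikube-start-args='--driver=docker --container-runtime=crio' -test.run TestAddons/parallel/Registry" make integration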
TestAddons/serial/Volcano (0.44s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable volcano --alsologtostderr -v=1: exit status 11 (436.34055ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:23:51.697559   10937 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:23:51.700228   10937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:51.700248   10937 out.go:374] Setting ErrFile to fd 2...
	I1019 16:23:51.700255   10937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:23:51.700526   10937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:23:51.700832   10937 mustload.go:66] Loading cluster: addons-567517
	I1019 16:23:51.701254   10937 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:51.701274   10937 addons.go:607] checking whether the cluster is paused
	I1019 16:23:51.701389   10937 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:23:51.701405   10937 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:23:51.701867   10937 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:23:51.746848   10937 ssh_runner.go:195] Run: systemctl --version
	I1019 16:23:51.746915   10937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:23:51.769111   10937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:23:51.877100   10937 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:23:51.877224   10937 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:23:51.907316   10937 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:23:51.907335   10937 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:23:51.907340   10937 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:23:51.907344   10937 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:23:51.907347   10937 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:23:51.907351   10937 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:23:51.907360   10937 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:23:51.907364   10937 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:23:51.907367   10937 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:23:51.907374   10937 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:23:51.907377   10937 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:23:51.907380   10937 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:23:51.907383   10937 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:23:51.907386   10937 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:23:51.907390   10937 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:23:51.907394   10937 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:23:51.907397   10937 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:23:51.907401   10937 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:23:51.907404   10937 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:23:51.907407   10937 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:23:51.907413   10937 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:23:51.907416   10937 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:23:51.907419   10937 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:23:51.907422   10937 cri.go:89] found id: ""
	I1019 16:23:51.907471   10937 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:23:51.923092   10937 out.go:203] 
	W1019 16:23:51.925978   10937 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:23:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:23:51.926013   10937 out.go:285] * 
	* 
	W1019 16:23:52.037283   10937 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:23:52.040275   10937 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.44s)
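
The failure is not Volcano-specific: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to runc, which cannot find its state directory (/run/runc) on this crio node. Both commands from the log above can be replayed by hand to confirm; a minimal sketch, reusing the profile name from this run:

	# listing kube-system containers via crictl succeeds in the log
	out/minikube-linux-arm64 -p addons-567517 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the paused-state check is what actually fails
	out/minikube-linux-arm64 -p addons-567517 ssh "sudo runc list -f json"   # expect: open /run/runc: no such file or directory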

TestAddons/parallel/Registry (16.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.707954ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008106316s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003927698s
addons_test.go:392: (dbg) Run:  kubectl --context addons-567517 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-567517 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-567517 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.014544998s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 ip
2025/10/19 16:24:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable registry --alsologtostderr -v=1: exit status 11 (273.539092ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:17.927938   11538 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:17.928175   11538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:17.928189   11538 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:17.928195   11538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:17.928467   11538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:17.928793   11538 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:17.929194   11538 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:17.929215   11538 addons.go:607] checking whether the cluster is paused
	I1019 16:24:17.929352   11538 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:17.929368   11538 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:17.929902   11538 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:17.956442   11538 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:17.956508   11538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:17.975519   11538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:18.082356   11538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:18.082441   11538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:18.114447   11538 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:18.114467   11538 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:18.114473   11538 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:18.114477   11538 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:18.114481   11538 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:18.114485   11538 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:18.114488   11538 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:18.114493   11538 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:18.114496   11538 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:18.114503   11538 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:18.114507   11538 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:18.114511   11538 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:18.114516   11538 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:18.114519   11538 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:18.114522   11538 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:18.114530   11538 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:18.114564   11538 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:18.114570   11538 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:18.114573   11538 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:18.114576   11538 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:18.114581   11538 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:18.114584   11538 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:18.114588   11538 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:18.114596   11538 cri.go:89] found id: ""
	I1019 16:24:18.114649   11538 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:18.131196   11538 out.go:203] 
	W1019 16:24:18.134209   11538 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:18.134380   11538 out.go:285] * 
	* 
	W1019 16:24:18.138326   11538 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:18.141408   11538 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.69s)

TestAddons/parallel/RegistryCreds (0.53s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.161581ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-567517
addons_test.go:332: (dbg) Run:  kubectl --context addons-567517 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (258.48913ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:25:04.396785   13479 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:25:04.396939   13479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:04.396945   13479 out.go:374] Setting ErrFile to fd 2...
	I1019 16:25:04.396949   13479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:04.397201   13479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:25:04.397551   13479 mustload.go:66] Loading cluster: addons-567517
	I1019 16:25:04.397914   13479 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:04.397926   13479 addons.go:607] checking whether the cluster is paused
	I1019 16:25:04.398032   13479 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:04.398042   13479 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:25:04.398488   13479 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:25:04.421926   13479 ssh_runner.go:195] Run: systemctl --version
	I1019 16:25:04.421982   13479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:25:04.439535   13479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:25:04.545169   13479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:25:04.545258   13479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:25:04.578655   13479 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:25:04.578719   13479 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:25:04.578737   13479 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:25:04.578756   13479 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:25:04.578773   13479 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:25:04.578805   13479 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:25:04.578831   13479 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:25:04.578849   13479 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:25:04.578866   13479 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:25:04.578891   13479 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:25:04.578925   13479 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:25:04.578948   13479 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:25:04.578965   13479 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:25:04.578982   13479 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:25:04.579000   13479 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:25:04.579035   13479 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:25:04.579068   13479 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:25:04.579088   13479 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:25:04.579106   13479 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:25:04.579125   13479 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:25:04.579153   13479 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:25:04.579174   13479 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:25:04.579192   13479 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:25:04.579209   13479 cri.go:89] found id: ""
	I1019 16:25:04.579287   13479 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:25:04.593525   13479 out.go:203] 
	W1019 16:25:04.596475   13479 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:25:04.596505   13479 out.go:285] * 
	* 
	W1019 16:25:04.600331   13479 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:25:04.603376   13479 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.53s)

TestAddons/parallel/Ingress (145.58s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-567517 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-567517 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-567517 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [8d943803-2654-4bc7-a94c-e6a34cc85433] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [8d943803-2654-4bc7-a94c-e6a34cc85433] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005542031s
I1019 16:24:46.548528    4111 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.339705338s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-567517 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
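
For the curl probe above, exit status 28 is curl's timeout code (CURLE_OPERATION_TIMEDOUT), so the request hung rather than being refused. A minimal manual re-check of the same probe with a shorter timeout and verbose output; the command shape is taken from the test, while the follow-up kubectl query is an assumption about where to look next:

	out/minikube-linux-arm64 -p addons-567517 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-567517 get pods,svc -n ingress-nginx
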
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-567517
helpers_test.go:243: (dbg) docker inspect addons-567517:

-- stdout --
	[
	    {
	        "Id": "30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2",
	        "Created": "2025-10-19T16:21:18.715230834Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5275,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:21:18.779663674Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/hosts",
	        "LogPath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2-json.log",
	        "Name": "/addons-567517",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-567517:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-567517",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2",
	                "LowerDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-567517",
	                "Source": "/var/lib/docker/volumes/addons-567517/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-567517",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-567517",
	                "name.minikube.sigs.k8s.io": "addons-567517",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f66e53da837f7ec52a165f3cbc8b47b69a445c1cb1b94ab15cd491c6b2c2d1",
	            "SandboxKey": "/var/run/docker/netns/29f66e53da83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-567517": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:ac:b6:e1:36:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bb00710760e8f44157b720b54e4e9184ba695ef1c209c7eddbcabbeafc2696cc",
	                    "EndpointID": "da1b2e3b8fc8a3076cb97b91e16a3d43c2c9fff01f3db7053a7df18716c62147",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-567517",
	                        "30d4c94890b4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-567517 -n addons-567517
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-567517 logs -n 25: (1.478568117s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-893374                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-893374 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ --download-only -p binary-mirror-533416 --alsologtostderr --binary-mirror http://127.0.0.1:41649 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-533416   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ -p binary-mirror-533416                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-533416   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ addons  │ enable dashboard -p addons-567517                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-567517                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ start   │ -p addons-567517 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:23 UTC │
	│ addons  │ addons-567517 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-567517 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ ip      │ addons-567517 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │ 19 Oct 25 16:24 UTC │
	│ addons  │ addons-567517 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ ssh     │ addons-567517 ssh cat /opt/local-path-provisioner/pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │ 19 Oct 25 16:24 UTC │
	│ addons  │ addons-567517 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ enable headlamp -p addons-567517 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ ssh     │ addons-567517 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │                     │
	│ addons  │ addons-567517 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-567517                                                                                                                                                                                                                                                                                                                                                                                           │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-567517 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │                     │
	│ ip      │ addons-567517 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:26 UTC │ 19 Oct 25 16:26 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:52.711728    4866 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:52.711924    4866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:52.711951    4866 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:52.711968    4866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:52.712356    4866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:20:52.713414    4866 out.go:368] Setting JSON to false
	I1019 16:20:52.714171    4866 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":201,"bootTime":1760890652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:20:52.714270    4866 start.go:143] virtualization:  
	I1019 16:20:52.717648    4866 out.go:179] * [addons-567517] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 16:20:52.721598    4866 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:20:52.721672    4866 notify.go:221] Checking for updates...
	I1019 16:20:52.727553    4866 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:52.730706    4866 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:20:52.733716    4866 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:20:52.736565    4866 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 16:20:52.739411    4866 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:20:52.742739    4866 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:52.774804    4866 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:20:52.774925    4866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:52.829739    4866 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-19 16:20:52.820362093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:52.829847    4866 docker.go:319] overlay module found
	I1019 16:20:52.832915    4866 out.go:179] * Using the docker driver based on user configuration
	I1019 16:20:52.835882    4866 start.go:309] selected driver: docker
	I1019 16:20:52.835902    4866 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:52.835915    4866 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:20:52.836629    4866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:52.890858    4866 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-19 16:20:52.88168984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:52.891026    4866 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:52.891253    4866 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:20:52.894144    4866 out.go:179] * Using Docker driver with root privileges
	I1019 16:20:52.897018    4866 cni.go:84] Creating CNI manager for ""
	I1019 16:20:52.897082    4866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:20:52.897094    4866 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:52.897165    4866 start.go:353] cluster config:
	{Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:52.902042    4866 out.go:179] * Starting "addons-567517" primary control-plane node in "addons-567517" cluster
	I1019 16:20:52.904809    4866 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:20:52.907672    4866 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:52.910493    4866 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:52.910567    4866 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:52.910608    4866 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 16:20:52.910617    4866 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:52.910724    4866 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 16:20:52.910732    4866 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 16:20:52.911064    4866 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/config.json ...
	I1019 16:20:52.911082    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/config.json: {Name:mk491f7cd4580b695ff73a32359e8a6b5d14b00d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:20:52.925505    4866 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:52.925629    4866 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:52.925654    4866 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 16:20:52.925659    4866 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 16:20:52.925667    4866 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 16:20:52.925676    4866 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 16:21:11.337906    4866 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 16:21:11.337960    4866 cache.go:233] Successfully downloaded all kic artifacts
	I1019 16:21:11.337987    4866 start.go:360] acquireMachinesLock for addons-567517: {Name:mk619b65a6c60e99d51761523a9021973b2a13ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:11.338098    4866 start.go:364] duration metric: took 86.826µs to acquireMachinesLock for "addons-567517"
	I1019 16:21:11.338128    4866 start.go:93] Provisioning new machine with config: &{Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:21:11.338209    4866 start.go:125] createHost starting for "" (driver="docker")
	I1019 16:21:11.341563    4866 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 16:21:11.341827    4866 start.go:159] libmachine.API.Create for "addons-567517" (driver="docker")
	I1019 16:21:11.341871    4866 client.go:171] LocalClient.Create starting
	I1019 16:21:11.341991    4866 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 16:21:12.354371    4866 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 16:21:12.987114    4866 cli_runner.go:164] Run: docker network inspect addons-567517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 16:21:13.007585    4866 cli_runner.go:211] docker network inspect addons-567517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 16:21:13.007673    4866 network_create.go:284] running [docker network inspect addons-567517] to gather additional debugging logs...
	I1019 16:21:13.007710    4866 cli_runner.go:164] Run: docker network inspect addons-567517
	W1019 16:21:13.023437    4866 cli_runner.go:211] docker network inspect addons-567517 returned with exit code 1
	I1019 16:21:13.023483    4866 network_create.go:287] error running [docker network inspect addons-567517]: docker network inspect addons-567517: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-567517 not found
	I1019 16:21:13.023498    4866 network_create.go:289] output of [docker network inspect addons-567517]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-567517 not found
	
	** /stderr **
	I1019 16:21:13.023612    4866 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:21:13.040111    4866 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d8110}
	I1019 16:21:13.040155    4866 network_create.go:124] attempt to create docker network addons-567517 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 16:21:13.040213    4866 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-567517 addons-567517
	I1019 16:21:13.100038    4866 network_create.go:108] docker network addons-567517 192.168.49.0/24 created
	I1019 16:21:13.100070    4866 kic.go:121] calculated static IP "192.168.49.2" for the "addons-567517" container
	I1019 16:21:13.100158    4866 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 16:21:13.115523    4866 cli_runner.go:164] Run: docker volume create addons-567517 --label name.minikube.sigs.k8s.io=addons-567517 --label created_by.minikube.sigs.k8s.io=true
	I1019 16:21:13.133409    4866 oci.go:103] Successfully created a docker volume addons-567517
	I1019 16:21:13.133498    4866 cli_runner.go:164] Run: docker run --rm --name addons-567517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-567517 --entrypoint /usr/bin/test -v addons-567517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 16:21:14.178695    4866 cli_runner.go:217] Completed: docker run --rm --name addons-567517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-567517 --entrypoint /usr/bin/test -v addons-567517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (1.045160364s)
	I1019 16:21:14.178733    4866 oci.go:107] Successfully prepared a docker volume addons-567517
	I1019 16:21:14.178775    4866 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:14.178798    4866 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 16:21:14.178867    4866 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-567517:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 16:21:18.639826    4866 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-567517:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.460916175s)
	I1019 16:21:18.639853    4866 kic.go:203] duration metric: took 4.461053231s to extract preloaded images to volume ...
	W1019 16:21:18.640002    4866 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 16:21:18.640124    4866 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 16:21:18.700135    4866 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-567517 --name addons-567517 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-567517 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-567517 --network addons-567517 --ip 192.168.49.2 --volume addons-567517:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 16:21:19.049460    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Running}}
	I1019 16:21:19.074109    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:19.102480    4866 cli_runner.go:164] Run: docker exec addons-567517 stat /var/lib/dpkg/alternatives/iptables
	I1019 16:21:19.153630    4866 oci.go:144] the created container "addons-567517" has a running status.
	I1019 16:21:19.153659    4866 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa...
	I1019 16:21:19.641201    4866 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 16:21:19.670728    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:19.702784    4866 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 16:21:19.702804    4866 kic_runner.go:114] Args: [docker exec --privileged addons-567517 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 16:21:19.758209    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:19.780038    4866 machine.go:94] provisionDockerMachine start ...
	I1019 16:21:19.780147    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:19.803281    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:19.803635    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:19.803650    4866 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 16:21:19.986799    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-567517
	
	I1019 16:21:19.986870    4866 ubuntu.go:182] provisioning hostname "addons-567517"
	I1019 16:21:19.986964    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.023499    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:20.023814    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:20.023827    4866 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-567517 && echo "addons-567517" | sudo tee /etc/hostname
	I1019 16:21:20.195445    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-567517
	
	I1019 16:21:20.195526    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.214456    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:20.214846    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:20.214870    4866 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-567517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-567517/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-567517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 16:21:20.374557    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 16:21:20.374587    4866 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 16:21:20.374619    4866 ubuntu.go:190] setting up certificates
	I1019 16:21:20.374636    4866 provision.go:84] configureAuth start
	I1019 16:21:20.374703    4866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-567517
	I1019 16:21:20.391399    4866 provision.go:143] copyHostCerts
	I1019 16:21:20.391509    4866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 16:21:20.391648    4866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 16:21:20.391717    4866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 16:21:20.391776    4866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.addons-567517 san=[127.0.0.1 192.168.49.2 addons-567517 localhost minikube]
	I1019 16:21:20.606253    4866 provision.go:177] copyRemoteCerts
	I1019 16:21:20.606317    4866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 16:21:20.606356    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.623756    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:20.726312    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 16:21:20.743663    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 16:21:20.763117    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 16:21:20.780382    4866 provision.go:87] duration metric: took 405.732372ms to configureAuth
	I1019 16:21:20.780407    4866 ubuntu.go:206] setting minikube options for container-runtime
	I1019 16:21:20.780587    4866 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:20.780696    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.798834    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:20.799137    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:20.799159    4866 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 16:21:21.049679    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 16:21:21.049704    4866 machine.go:97] duration metric: took 1.269643093s to provisionDockerMachine
	I1019 16:21:21.049713    4866 client.go:174] duration metric: took 9.707833688s to LocalClient.Create
	I1019 16:21:21.049726    4866 start.go:167] duration metric: took 9.7079012s to libmachine.API.Create "addons-567517"
	I1019 16:21:21.049733    4866 start.go:293] postStartSetup for "addons-567517" (driver="docker")
	I1019 16:21:21.049743    4866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 16:21:21.049811    4866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 16:21:21.049870    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.067757    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.170483    4866 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 16:21:21.173710    4866 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 16:21:21.173775    4866 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 16:21:21.173793    4866 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 16:21:21.173875    4866 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 16:21:21.173908    4866 start.go:296] duration metric: took 124.169347ms for postStartSetup
	I1019 16:21:21.174258    4866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-567517
	I1019 16:21:21.191175    4866 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/config.json ...
	I1019 16:21:21.191470    4866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:21:21.191519    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.209043    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.311591    4866 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 16:21:21.316115    4866 start.go:128] duration metric: took 9.977890882s to createHost
	I1019 16:21:21.316141    4866 start.go:83] releasing machines lock for "addons-567517", held for 9.978030089s
	I1019 16:21:21.316208    4866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-567517
	I1019 16:21:21.333298    4866 ssh_runner.go:195] Run: cat /version.json
	I1019 16:21:21.333352    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.333596    4866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 16:21:21.333660    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.360666    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.361439    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.548638    4866 ssh_runner.go:195] Run: systemctl --version
	I1019 16:21:21.554778    4866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 16:21:21.589430    4866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 16:21:21.594291    4866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 16:21:21.594357    4866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 16:21:21.621793    4866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 16:21:21.621866    4866 start.go:496] detecting cgroup driver to use...
	I1019 16:21:21.621914    4866 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 16:21:21.621996    4866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 16:21:21.638627    4866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 16:21:21.650627    4866 docker.go:218] disabling cri-docker service (if available) ...
	I1019 16:21:21.650687    4866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 16:21:21.668231    4866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 16:21:21.686965    4866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 16:21:21.799409    4866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 16:21:21.923747    4866 docker.go:234] disabling docker service ...
	I1019 16:21:21.923813    4866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 16:21:21.944368    4866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 16:21:21.957725    4866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 16:21:22.071050    4866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 16:21:22.196039    4866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 16:21:22.210107    4866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 16:21:22.225283    4866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 16:21:22.225390    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.235383    4866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 16:21:22.235517    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.245098    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.253852    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.262416    4866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 16:21:22.270372    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.278989    4866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.291894    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.300472    4866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 16:21:22.307633    4866 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 16:21:22.307723    4866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 16:21:22.321272    4866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 16:21:22.328669    4866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:22.438967    4866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 16:21:22.563633    4866 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 16:21:22.563723    4866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 16:21:22.567331    4866 start.go:564] Will wait 60s for crictl version
	I1019 16:21:22.567387    4866 ssh_runner.go:195] Run: which crictl
	I1019 16:21:22.570646    4866 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 16:21:22.598732    4866 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 16:21:22.598906    4866 ssh_runner.go:195] Run: crio --version
	I1019 16:21:22.626264    4866 ssh_runner.go:195] Run: crio --version
	I1019 16:21:22.656369    4866 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 16:21:22.659355    4866 cli_runner.go:164] Run: docker network inspect addons-567517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:21:22.675413    4866 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 16:21:22.679130    4866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:22.688908    4866 kubeadm.go:884] updating cluster {Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 16:21:22.689032    4866 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:22.689093    4866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:22.724750    4866 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:22.724773    4866 crio.go:433] Images already preloaded, skipping extraction
	I1019 16:21:22.724826    4866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:22.750363    4866 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:22.750385    4866 cache_images.go:86] Images are preloaded, skipping loading
	I1019 16:21:22.750393    4866 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 16:21:22.750480    4866 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-567517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 16:21:22.750589    4866 ssh_runner.go:195] Run: crio config
	I1019 16:21:22.812291    4866 cni.go:84] Creating CNI manager for ""
	I1019 16:21:22.812318    4866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:22.812338    4866 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 16:21:22.812360    4866 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-567517 NodeName:addons-567517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 16:21:22.812489    4866 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-567517"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 16:21:22.812561    4866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 16:21:22.820127    4866 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 16:21:22.820189    4866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 16:21:22.827098    4866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 16:21:22.839347    4866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 16:21:22.851098    4866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1019 16:21:22.862912    4866 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 16:21:22.866654    4866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:22.875880    4866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:22.980104    4866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:22.994860    4866 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517 for IP: 192.168.49.2
	I1019 16:21:22.994884    4866 certs.go:195] generating shared ca certs ...
	I1019 16:21:22.994900    4866 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:22.995016    4866 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 16:21:23.865953    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt ...
	I1019 16:21:23.865982    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt: {Name:mkf27cf70815f99453893555ee6791fe81ad17cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:23.866162    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key ...
	I1019 16:21:23.866175    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key: {Name:mk664244a6bffdbc499971b768334808c7f88ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:23.866249    4866 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 16:21:24.754684    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt ...
	I1019 16:21:24.754715    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt: {Name:mke2b0b8c1c015a719d5f79ce7a9bd1893fcb19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:24.754893    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key ...
	I1019 16:21:24.754908    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key: {Name:mk93e4874429d278bc7d76ec409b752a3dd045e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:24.754982    4866 certs.go:257] generating profile certs ...
	I1019 16:21:24.755059    4866 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.key
	I1019 16:21:24.755077    4866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt with IP's: []
	I1019 16:21:25.391736    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt ...
	I1019 16:21:25.391766    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: {Name:mkbe082a86ad49bca82b3c1e87468b596f96c8d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.391943    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.key ...
	I1019 16:21:25.391954    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.key: {Name:mkea240b97b8e09867828145871510e812e090d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.392035    4866 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e
	I1019 16:21:25.392055    4866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 16:21:25.611487    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e ...
	I1019 16:21:25.611516    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e: {Name:mke2387f21657fa72494aa52dfd2d980b8c2b71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.611683    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e ...
	I1019 16:21:25.611696    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e: {Name:mk04c718a5bc5921681f73c6a363ba3dcda70529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.611777    4866 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt
	I1019 16:21:25.611858    4866 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key
	I1019 16:21:25.611912    4866 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key
	I1019 16:21:25.611931    4866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt with IP's: []
	I1019 16:21:25.776507    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt ...
	I1019 16:21:25.776534    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt: {Name:mke38ad0e401d7c6e6c8dbba919f6b59c860a004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.776695    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key ...
	I1019 16:21:25.776707    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key: {Name:mkfa49a36b011783748654ec04e4f45b988d49fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.776893    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 16:21:25.776936    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 16:21:25.776963    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 16:21:25.776989    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
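The apiserver certificate generated above is signed for the service VIP, loopback, and node IPs listed on the Generating line (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2). Those SANs can be read back with openssl; a sketch, assuming the profile lives under $HOME/.minikube as it does for the jenkins user in this log:

    openssl x509 -noout -text \
      -in "$HOME/.minikube/profiles/addons-567517/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'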
	I1019 16:21:25.777549    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 16:21:25.795473    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 16:21:25.812813    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 16:21:25.830128    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 16:21:25.848798    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 16:21:25.866369    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 16:21:25.884502    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 16:21:25.901568    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 16:21:25.918882    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 16:21:25.936138    4866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 16:21:25.948850    4866 ssh_runner.go:195] Run: openssl version
	I1019 16:21:25.955089    4866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 16:21:25.963437    4866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:25.967028    4866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:25.967132    4866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:26.008214    4866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
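The last few commands install the minikube CA into the node's trust store the way update-ca-certificates would: the PEM is copied under /usr/share/ca-certificates, then linked from /etc/ssl/certs under its OpenSSL subject hash (b5213941 here) with a .0 suffix, which is the layout OpenSSL's lookup-by-hash expects. The same wiring by hand, as a sketch:

    CA=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CA")   # e.g. b5213941
    sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"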
	I1019 16:21:26.016690    4866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 16:21:26.020539    4866 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 16:21:26.020588    4866 kubeadm.go:401] StartCluster: {Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:21:26.020684    4866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:21:26.020746    4866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:21:26.052367    4866 cri.go:89] found id: ""
	I1019 16:21:26.052515    4866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 16:21:26.061344    4866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 16:21:26.069748    4866 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 16:21:26.069866    4866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 16:21:26.079334    4866 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 16:21:26.079389    4866 kubeadm.go:158] found existing configuration files:
	
	I1019 16:21:26.079481    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 16:21:26.090069    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 16:21:26.090212    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 16:21:26.101051    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 16:21:26.108714    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 16:21:26.108774    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 16:21:26.115837    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 16:21:26.123431    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 16:21:26.123502    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 16:21:26.130768    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 16:21:26.137993    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 16:21:26.138053    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
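The four grep/rm pairs above are one idiom repeated per kubeconfig: if a file does not point at https://control-plane.minikube.internal:8443 (or does not exist, as here on first start), it is treated as stale and removed before kubeadm init runs. Collapsed into a loop, as a sketch rather than minikube's actual code:

    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      sudo grep -q 'https://control-plane.minikube.internal:8443' "$conf" 2>/dev/null \
        || sudo rm -f "$conf"
    done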
	I1019 16:21:26.145003    4866 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 16:21:26.181734    4866 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 16:21:26.182028    4866 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 16:21:26.209040    4866 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 16:21:26.209119    4866 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 16:21:26.209161    4866 kubeadm.go:319] OS: Linux
	I1019 16:21:26.209213    4866 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 16:21:26.209266    4866 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 16:21:26.209319    4866 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 16:21:26.209373    4866 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 16:21:26.209428    4866 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 16:21:26.209491    4866 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 16:21:26.209542    4866 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 16:21:26.209596    4866 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 16:21:26.209649    4866 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 16:21:26.274752    4866 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 16:21:26.274871    4866 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 16:21:26.274989    4866 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 16:21:26.287534    4866 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 16:21:26.294374    4866 out.go:252]   - Generating certificates and keys ...
	I1019 16:21:26.294486    4866 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 16:21:26.294594    4866 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 16:21:27.377739    4866 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 16:21:27.968860    4866 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 16:21:28.075981    4866 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 16:21:29.363354    4866 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 16:21:29.803554    4866 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 16:21:29.803847    4866 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-567517 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:30.125705    4866 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 16:21:30.126093    4866 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-567517 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:30.501437    4866 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 16:21:31.057534    4866 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 16:21:31.391564    4866 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 16:21:31.391870    4866 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 16:21:31.853073    4866 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 16:21:32.261384    4866 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 16:21:32.475631    4866 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 16:21:33.059789    4866 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 16:21:34.105422    4866 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 16:21:34.105967    4866 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 16:21:34.110555    4866 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 16:21:34.113953    4866 out.go:252]   - Booting up control plane ...
	I1019 16:21:34.114087    4866 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 16:21:34.114181    4866 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 16:21:34.114710    4866 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 16:21:34.131037    4866 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 16:21:34.131151    4866 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 16:21:34.138891    4866 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 16:21:34.139225    4866 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 16:21:34.139273    4866 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 16:21:34.268691    4866 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 16:21:34.268815    4866 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 16:21:35.269273    4866 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000860818s
	I1019 16:21:35.272968    4866 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 16:21:35.273080    4866 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 16:21:35.273198    4866 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 16:21:35.273314    4866 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 16:21:39.129235    4866 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.856239003s
	I1019 16:21:39.367458    4866 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.093610571s
	I1019 16:21:40.774353    4866 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501321265s
	I1019 16:21:40.797164    4866 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 16:21:40.809056    4866 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 16:21:40.824036    4866 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 16:21:40.824266    4866 kubeadm.go:319] [mark-control-plane] Marking the node addons-567517 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 16:21:40.836358    4866 kubeadm.go:319] [bootstrap-token] Using token: no6kd7.it7lncyyywpjgtmi
	I1019 16:21:40.839580    4866 out.go:252]   - Configuring RBAC rules ...
	I1019 16:21:40.839713    4866 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 16:21:40.845685    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 16:21:40.853824    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 16:21:40.857559    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1019 16:21:40.861614    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 16:21:40.865727    4866 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 16:21:41.182992    4866 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 16:21:41.623722    4866 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 16:21:42.182797    4866 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 16:21:42.184132    4866 kubeadm.go:319] 
	I1019 16:21:42.184231    4866 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 16:21:42.184239    4866 kubeadm.go:319] 
	I1019 16:21:42.184320    4866 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 16:21:42.184352    4866 kubeadm.go:319] 
	I1019 16:21:42.184384    4866 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 16:21:42.184449    4866 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 16:21:42.184512    4866 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 16:21:42.184523    4866 kubeadm.go:319] 
	I1019 16:21:42.184582    4866 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 16:21:42.184591    4866 kubeadm.go:319] 
	I1019 16:21:42.184642    4866 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 16:21:42.184651    4866 kubeadm.go:319] 
	I1019 16:21:42.184706    4866 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 16:21:42.184789    4866 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 16:21:42.184865    4866 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 16:21:42.184874    4866 kubeadm.go:319] 
	I1019 16:21:42.184963    4866 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 16:21:42.185048    4866 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 16:21:42.185057    4866 kubeadm.go:319] 
	I1019 16:21:42.185180    4866 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token no6kd7.it7lncyyywpjgtmi \
	I1019 16:21:42.185294    4866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 16:21:42.185322    4866 kubeadm.go:319] 	--control-plane 
	I1019 16:21:42.185331    4866 kubeadm.go:319] 
	I1019 16:21:42.185479    4866 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 16:21:42.185490    4866 kubeadm.go:319] 
	I1019 16:21:42.185577    4866 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token no6kd7.it7lncyyywpjgtmi \
	I1019 16:21:42.185756    4866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
	I1019 16:21:42.189426    4866 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 16:21:42.189681    4866 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 16:21:42.189799    4866 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
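The --discovery-token-ca-cert-hash printed with the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate with the standard kubeadm recipe; the certificateDir for this cluster is /var/lib/minikube/certs, as logged earlier:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'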
	I1019 16:21:42.189821    4866 cni.go:84] Creating CNI manager for ""
	I1019 16:21:42.189831    4866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:42.193091    4866 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 16:21:42.196438    4866 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 16:21:42.201519    4866 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 16:21:42.201539    4866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 16:21:42.222936    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 16:21:42.517168    4866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 16:21:42.517261    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:42.517332    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-567517 minikube.k8s.io/updated_at=2025_10_19T16_21_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=addons-567517 minikube.k8s.io/primary=true
	I1019 16:21:42.684093    4866 ops.go:34] apiserver oom_adj: -16
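The oom_adj of -16 read above biases the kernel's OOM killer strongly away from the apiserver process. oom_adj is the legacy knob; modern kernels scale it onto oom_score_adj (range -1000..1000), so both views of the same setting can be compared with a sketch like:

    pid=$(pgrep -o kube-apiserver)
    # legacy value and its scaled oom_score_adj equivalent
    cat "/proc/$pid/oom_adj" "/proc/$pid/oom_score_adj"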
	I1019 16:21:42.684224    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:43.184260    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:43.684906    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:44.184764    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:44.684274    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:45.185005    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:45.684200    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:46.184434    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:46.685120    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:47.184567    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:47.315880    4866 kubeadm.go:1114] duration metric: took 4.798678793s to wait for elevateKubeSystemPrivileges
	I1019 16:21:47.315905    4866 kubeadm.go:403] duration metric: took 21.295318862s to StartCluster
	I1019 16:21:47.315921    4866 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:47.316028    4866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:21:47.316403    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:47.316578    4866 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:21:47.316758    4866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 16:21:47.316924    4866 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 16:21:47.317003    4866 addons.go:70] Setting yakd=true in profile "addons-567517"
	I1019 16:21:47.317017    4866 addons.go:239] Setting addon yakd=true in "addons-567517"
	I1019 16:21:47.317038    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.317551    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.317926    4866 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:47.318144    4866 addons.go:70] Setting inspektor-gadget=true in profile "addons-567517"
	I1019 16:21:47.318177    4866 addons.go:239] Setting addon inspektor-gadget=true in "addons-567517"
	I1019 16:21:47.318249    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.318353    4866 addons.go:70] Setting metrics-server=true in profile "addons-567517"
	I1019 16:21:47.318376    4866 addons.go:239] Setting addon metrics-server=true in "addons-567517"
	I1019 16:21:47.318414    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.318820    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.318852    4866 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-567517"
	I1019 16:21:47.318874    4866 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-567517"
	I1019 16:21:47.318891    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.319267    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.325242    4866 addons.go:70] Setting registry=true in profile "addons-567517"
	I1019 16:21:47.325282    4866 addons.go:239] Setting addon registry=true in "addons-567517"
	I1019 16:21:47.325313    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.325336    4866 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-567517"
	I1019 16:21:47.325356    4866 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-567517"
	I1019 16:21:47.325380    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.325764    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.325799    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.337136    4866 addons.go:70] Setting registry-creds=true in profile "addons-567517"
	I1019 16:21:47.337224    4866 addons.go:239] Setting addon registry-creds=true in "addons-567517"
	I1019 16:21:47.337273    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.337835    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.338665    4866 addons.go:70] Setting cloud-spanner=true in profile "addons-567517"
	I1019 16:21:47.338729    4866 addons.go:239] Setting addon cloud-spanner=true in "addons-567517"
	I1019 16:21:47.338957    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.339449    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.355089    4866 addons.go:70] Setting storage-provisioner=true in profile "addons-567517"
	I1019 16:21:47.355126    4866 addons.go:239] Setting addon storage-provisioner=true in "addons-567517"
	I1019 16:21:47.355167    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.355743    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.357290    4866 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-567517"
	I1019 16:21:47.357394    4866 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-567517"
	I1019 16:21:47.357447    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.357924    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.368140    4866 addons.go:70] Setting default-storageclass=true in profile "addons-567517"
	I1019 16:21:47.368219    4866 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-567517"
	I1019 16:21:47.368313    4866 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-567517"
	I1019 16:21:47.368375    4866 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-567517"
	I1019 16:21:47.368611    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.369823    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.382640    4866 addons.go:70] Setting volcano=true in profile "addons-567517"
	I1019 16:21:47.382729    4866 addons.go:239] Setting addon volcano=true in "addons-567517"
	I1019 16:21:47.382777    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.383864    4866 addons.go:70] Setting gcp-auth=true in profile "addons-567517"
	I1019 16:21:47.383931    4866 mustload.go:66] Loading cluster: addons-567517
	I1019 16:21:47.384158    4866 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:47.384438    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.390135    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.397840    4866 addons.go:70] Setting volumesnapshots=true in profile "addons-567517"
	I1019 16:21:47.397964    4866 addons.go:239] Setting addon volumesnapshots=true in "addons-567517"
	I1019 16:21:47.398082    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.402107    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.403046    4866 addons.go:70] Setting ingress=true in profile "addons-567517"
	I1019 16:21:47.403098    4866 addons.go:239] Setting addon ingress=true in "addons-567517"
	I1019 16:21:47.403136    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.403685    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.423271    4866 out.go:179] * Verifying Kubernetes components...
	I1019 16:21:47.427257    4866 addons.go:70] Setting ingress-dns=true in profile "addons-567517"
	I1019 16:21:47.427313    4866 addons.go:239] Setting addon ingress-dns=true in "addons-567517"
	I1019 16:21:47.427359    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.427827    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.428301    4866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:47.318832    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.551599    4866 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 16:21:47.551748    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 16:21:47.556566    4866 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:47.556587    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 16:21:47.556653    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.560522    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 16:21:47.560555    4866 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 16:21:47.560629    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
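The Go template in these docker container inspect calls digs the published host port for the container's 22/tcp out of .NetworkSettings.Ports; docker port answers the same question directly. A sketch, where 32768 is the port the sshutil lines below end up dialing:

    docker port addons-567517 22/tcp   # e.g. 0.0.0.0:32768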
	I1019 16:21:47.573880    4866 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 16:21:47.575142    4866 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 16:21:47.579202    4866 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 16:21:47.579232    4866 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 16:21:47.579302    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.583584    4866 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:47.583616    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 16:21:47.583683    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.607328    4866 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:21:47.611008    4866 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:47.611040    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 16:21:47.611104    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.625819    4866 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 16:21:47.629211    4866 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 16:21:47.652125    4866 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-567517"
	I1019 16:21:47.652165    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.652573    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.661537    4866 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 16:21:47.661731    4866 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 16:21:47.681543    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 16:21:47.681567    4866 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 16:21:47.681627    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.684176    4866 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:47.684200    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 16:21:47.684265    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.704304    4866 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 16:21:47.711018    4866 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 16:21:47.711046    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 16:21:47.711124    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.712587    4866 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:47.712649    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 16:21:47.712729    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.737453    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 16:21:47.737829    4866 addons.go:239] Setting addon default-storageclass=true in "addons-567517"
	I1019 16:21:47.737892    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.738323    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.753306    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	W1019 16:21:47.754963    4866 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 16:21:47.757863    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.766793    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 16:21:47.766979    4866 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 16:21:47.784782    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 16:21:47.787857    4866 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 16:21:47.788805    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 16:21:47.788821    4866 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 16:21:47.788907    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.794477    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 16:21:47.805592    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:47.809171    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:47.813775    4866 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:47.813842    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 16:21:47.813934    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.839226    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 16:21:47.842355    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 16:21:47.845651    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 16:21:47.848959    4866 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:47.848982    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 16:21:47.849054    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.849283    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.851659    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.852851    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.853270    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.856736    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 16:21:47.862624    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 16:21:47.862653    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 16:21:47.862722    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.872413    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.877828    4866 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 16:21:47.883154    4866 out.go:179]   - Using image docker.io/busybox:stable
	I1019 16:21:47.890839    4866 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:47.890863    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 16:21:47.890936    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.948493    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.963899    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.970924    4866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
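The pipeline above edits the live coredns ConfigMap in place: it inserts a hosts block (192.168.49.1 host.minikube.internal, with fallthrough) ahead of the forward plugin so pods can resolve the host gateway, and inserts log above errors to enable query logging. With a working kubeconfig, the result can be checked with a sketch like:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
      | grep -A3 'hosts {'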
	I1019 16:21:47.975972    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.984458    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.995171    4866 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:47.995194    4866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 16:21:47.995256    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:48.012684    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.037059    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.050147    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.051811    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	W1019 16:21:48.057616    4866 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:48.057655    4866 retry.go:31] will retry after 251.674837ms: ssh: handshake failed: EOF
	I1019 16:21:48.065985    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.074869    4866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:48.075061    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	W1019 16:21:48.078870    4866 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:48.078900    4866 retry.go:31] will retry after 220.466218ms: ssh: handshake failed: EOF
	W1019 16:21:48.304519    4866 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:48.304584    4866 retry.go:31] will retry after 465.685346ms: ssh: handshake failed: EOF
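The "handshake failed: EOF" warnings here are expected this early: sshd inside the freshly started container may not be accepting connections yet, so sshutil retries with a growing delay. The equivalent shape in shell, as a sketch using the port, user, and key path from the sshutil lines (the key generalized to $HOME):

    for delay in 0.25 0.5 1; do
      ssh -o ConnectTimeout=5 -p 32768 \
        -i "$HOME/.minikube/machines/addons-567517/id_rsa" \
        docker@127.0.0.1 true && break
      sleep "$delay"
    done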
	I1019 16:21:48.477083    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 16:21:48.477140    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 16:21:48.607319    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 16:21:48.607392    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 16:21:48.631410    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:48.662009    4866 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 16:21:48.662031    4866 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 16:21:48.740843    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 16:21:48.740912    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 16:21:48.780480    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:48.791499    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:48.812269    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:48.813133    4866 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:48.813152    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 16:21:48.816676    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:48.820018    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:48.823711    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 16:21:48.823778    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 16:21:48.874514    4866 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 16:21:48.874655    4866 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 16:21:48.877569    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:48.884886    4866 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 16:21:48.884955    4866 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 16:21:48.930284    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:48.944738    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:48.983972    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 16:21:48.984042    4866 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 16:21:49.011075    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 16:21:49.011144    4866 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 16:21:49.065310    4866 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 16:21:49.065384    4866 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 16:21:49.099948    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 16:21:49.099973    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 16:21:49.127956    4866 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:49.127974    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 16:21:49.133189    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:49.133215    4866 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 16:21:49.200888    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 16:21:49.200910    4866 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 16:21:49.201877    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 16:21:49.201896    4866 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 16:21:49.260343    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:49.269803    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:49.299328    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:49.347431    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 16:21:49.347457    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 16:21:49.349041    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 16:21:49.349062    4866 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 16:21:49.356416    4866 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:49.356442    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 16:21:49.567747    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 16:21:49.567770    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 16:21:49.615093    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:49.632289    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:49.632313    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 16:21:49.690632    4866 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.719676275s)
	I1019 16:21:49.690662    4866 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
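For context on the line above: the sed pipeline rewrites the coredns ConfigMap so that pods inside the cluster can resolve the host machine. Reconstructed from the sed expression in the completed command (the resulting Corefile is not printed in the log), the fragment injected before the "forward . /etc/resolv.conf" directive is:

	hosts {
	    192.168.49.1 host.minikube.internal
	    fallthrough
	}

with an additional "log" directive inserted before the "errors" plugin.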
	I1019 16:21:49.690715    4866 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.615824921s)
	I1019 16:21:49.691445    4866 node_ready.go:35] waiting up to 6m0s for node "addons-567517" to be "Ready" ...
	I1019 16:21:49.882127    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 16:21:49.882201    4866 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 16:21:49.938508    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:50.144395    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 16:21:50.144467    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 16:21:50.198104    4866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-567517" context rescaled to 1 replicas
	I1019 16:21:50.364570    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 16:21:50.364590    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 16:21:50.647356    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:21:50.647376    4866 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 16:21:50.952462    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1019 16:21:51.708476    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:21:52.182625    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.551183299s)
	I1019 16:21:52.782444    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.001932415s)
	I1019 16:21:52.782677    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.991147735s)
	I1019 16:21:52.782711    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.970384021s)
	I1019 16:21:52.782771    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.966029684s)
	I1019 16:21:52.782805    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.962731813s)
	I1019 16:21:53.593344    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.715691722s)
	I1019 16:21:53.593376    4866 addons.go:480] Verifying addon ingress=true in "addons-567517"
	I1019 16:21:53.593557    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.663206816s)
	I1019 16:21:53.593639    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.648829253s)
	W1019 16:21:53.593657    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:53.593678    4866 retry.go:31] will retry after 227.953175ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
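The failure above is kubectl's client-side schema validation: every YAML document in an applied file must declare apiVersion and kind, and the error means at least one document in ig-crd.yaml does not (the file's contents are not reproduced in the log, so the exact offending document is unknown). A minimal, purely hypothetical reproduction of the same exit-status-1 behavior:

	# Hypothetical repro: the document after '---' lacks apiVersion/kind,
	# so kubectl prints "apiVersion not set, kind not set" and exits 1,
	# even though the first document is applied successfully.
	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: demo
	---
	metadata:
	  name: missing-type-info
	EOF

This also explains why the retries below cannot succeed: apply --force re-submits the same invalid file, so the validator fails identically on every attempt.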
	I1019 16:21:53.593739    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.333371799s)
	I1019 16:21:53.593751    4866 addons.go:480] Verifying addon metrics-server=true in "addons-567517"
	I1019 16:21:53.593772    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.323948618s)
	I1019 16:21:53.593960    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.294604596s)
	I1019 16:21:53.593990    4866 addons.go:480] Verifying addon registry=true in "addons-567517"
	I1019 16:21:53.594392    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.979268904s)
	W1019 16:21:53.594429    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 16:21:53.594443    4866 retry.go:31] will retry after 328.014494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
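This second failure is an ordering race rather than a malformed manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, before the API server has established the new kind. A sketch of one way to sequence it manually, using the resource names from the log (this is not minikube's actual retry strategy):

	# Apply the CRD first, wait until the API server accepts the new kind,
	# then apply the object that depends on it.
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml

Unlike the ig-crd.yaml case, a plain retry can succeed here once the CRDs are registered, which is consistent with this error not recurring later in the log.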
	I1019 16:21:53.594482    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.655885354s)
	I1019 16:21:53.596789    4866 out.go:179] * Verifying ingress addon...
	I1019 16:21:53.598797    4866 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-567517 service yakd-dashboard -n yakd-dashboard
	
	I1019 16:21:53.598854    4866 out.go:179] * Verifying registry addon...
	I1019 16:21:53.601503    4866 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 16:21:53.603429    4866 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 16:21:53.631603    4866 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:21:53.631624    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:53.631835    4866 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 16:21:53.631855    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
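The kapi.go lines here are minikube's internal readiness poll: it lists pods matching a label selector and loops until each leaves Pending. A roughly equivalent manual check with plain kubectl, using the selectors and namespaces from the log (the timeout value is illustrative):

	# Block until the addon pods report Ready (or the timeout expires).
	kubectl wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx -n ingress-nginx --timeout=360s
	kubectl wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry -n kube-system --timeout=360s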
	I1019 16:21:53.822220    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:53.923427    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:54.114174    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.161666726s)
	I1019 16:21:54.114257    4866 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-567517"
	I1019 16:21:54.116814    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.117369    4866 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 16:21:54.121057    4866 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 16:21:54.122959    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.128164    4866 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:21:54.128234    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:54.195873    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:21:54.607049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.607526    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.706256    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:54.915226    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.092965325s)
	W1019 16:21:54.915264    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:54.915319    4866 retry.go:31] will retry after 464.844418ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:55.106129    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.107031    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.125620    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:55.366338    4866 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 16:21:55.366445    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:55.380717    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:55.384285    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:55.516093    4866 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 16:21:55.534672    4866 addons.go:239] Setting addon gcp-auth=true in "addons-567517"
	I1019 16:21:55.534717    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:55.535183    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:55.561615    4866 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 16:21:55.561665    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:55.588373    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:55.607155    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.607372    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.624276    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.105460    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.106923    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:56.124820    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.606469    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.615509    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:56.694898    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:21:56.706717    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.827476    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.90399759s)
	I1019 16:21:56.827564    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.446825626s)
	W1019 16:21:56.827590    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:56.827610    4866 retry.go:31] will retry after 389.198287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:56.827647    4866 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.266013745s)
	I1019 16:21:56.830716    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:56.833653    4866 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 16:21:56.836489    4866 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 16:21:56.836518    4866 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 16:21:56.850442    4866 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 16:21:56.850464    4866 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 16:21:56.865017    4866 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:56.865040    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 16:21:56.877879    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:57.106826    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:57.107312    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.125044    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.217786    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:57.387306    4866 addons.go:480] Verifying addon gcp-auth=true in "addons-567517"
	I1019 16:21:57.390607    4866 out.go:179] * Verifying gcp-auth addon...
	I1019 16:21:57.394367    4866 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 16:21:57.408528    4866 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 16:21:57.408602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:57.610341    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:57.611018    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.624368    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.898249    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.104419    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.106752    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.124634    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:58.150158    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:58.150230    4866 retry.go:31] will retry after 1.068598811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:58.397740    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.605297    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.608174    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.624156    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:58.695229    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:21:58.897269    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.105150    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.106491    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.124400    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.219628    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:59.397852    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.605994    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.606752    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.625075    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.897559    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.201298    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:00.201526    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.201817    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.238988    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.019299907s)
	W1019 16:22:00.239027    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:00.239061    4866 retry.go:31] will retry after 1.378380059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:00.400895    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.604451    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.606844    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.625234    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:00.898132    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.105812    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.107015    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.124913    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:01.194926    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:01.398134    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.607235    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.615825    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.618080    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:01.627203    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:01.898358    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:02.107769    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.108353    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:02.124997    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:02.398170    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:02.472961    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.472990    4866 retry.go:31] will retry after 1.262803844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.604998    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:02.607120    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.625134    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:02.898205    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.104606    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.107178    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.124944    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:03.397521    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.605005    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.608885    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.624991    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:03.694847    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:03.735944    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:03.898451    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:04.106239    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.107907    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.124979    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.398458    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:04.552266    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:04.552349    4866 retry.go:31] will retry after 1.842388176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:04.606177    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.606344    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.637517    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.897422    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.105598    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.105977    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.124759    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:05.397826    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.604894    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.607115    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.625141    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:05.695045    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:05.897849    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.106015    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.108445    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.124143    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.395444    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:06.398261    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.604740    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.607409    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.625525    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.897849    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.106885    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.107441    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.124862    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:07.179504    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:07.179569    4866 retry.go:31] will retry after 5.462748642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
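Across the ig-crd.yaml attempts the retry delays grow from roughly 0.2s to 5.5s, non-monotonically, which is consistent with a jittered exponential backoff. A bash sketch of the pattern the log reflects (the doubling factor, cap, and attempt count are illustrative, not minikube's exact retry.go parameters):

	# Retry the apply with growing, capped delays between attempts.
	delay=0.2
	for attempt in $(seq 1 8); do
	  if sudo kubectl apply --force -f ig-crd.yaml -f ig-deployment.yaml; then
	    break
	  fi
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { d *= 2; if (d > 6) d = 6; print d }')
	done

Backoff cannot help here, though: the manifest itself is invalid, so every attempt fails with the same validation error until the file is fixed.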
	I1019 16:22:07.397633    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.606185    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.606478    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.624375    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:07.897347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.105682    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.106265    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.124138    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:08.194936    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:08.398014    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.605538    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.606978    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.624972    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:08.900602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.105313    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.107391    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.124094    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.398067    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.605908    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.606095    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.624790    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.897136    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.105306    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.106610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.124507    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:10.195200    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:10.397569    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.604696    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.606921    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.624986    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:10.898013    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.106604    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.108142    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.124217    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.397905    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.605402    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.606848    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.624888    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.897534    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.106012    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.106942    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.124478    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.397470    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.605959    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.606405    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.624629    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.642784    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 16:22:12.695103    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:12.897118    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:13.106780    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.107181    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.124465    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.397377    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:13.435214    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:13.435247    4866 retry.go:31] will retry after 7.252097001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
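
	(Editor's note: the failure repeated above is kubectl rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest, or one YAML document inside it, lacks the top-level apiVersion and kind fields every Kubernetes object must declare. A minimal sketch of that client-side check follows; it assumes gopkg.in/yaml.v3 and a hypothetical validate helper, not kubectl's actual validator.)

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	// typeMeta mirrors the two fields kubectl validates first on every object.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	// validate fails when either field is empty, reproducing the
	// "apiVersion not set, kind not set" error seen in the log above.
	func validate(doc []byte) error {
		var tm typeMeta
		if err := yaml.Unmarshal(doc, &tm); err != nil {
			return err
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			return fmt.Errorf("error validating data: [apiVersion not set, kind not set]")
		}
		return nil
	}

	func main() {
		bad := []byte("metadata:\n  name: gadget\n") // no apiVersion/kind, like the broken CRD doc
		fmt.Println(validate(bad))
	}
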
	I1019 16:22:13.605583    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.606758    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.624569    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.898328    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.106454    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:14.106680    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.124162    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.397611    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.604629    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.607649    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:14.624736    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.897147    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.106308    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.106994    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.125037    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:15.194887    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:15.397585    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.605223    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.607407    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.624433    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:15.897602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.105103    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.107986    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.125573    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.397127    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.605638    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.606199    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.627214    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.897239    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.105256    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.106463    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.124228    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:17.195048    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:17.398100    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.605288    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.615031    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.623994    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:17.897768    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.105538    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.107458    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.124553    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.397281    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.605005    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.606105    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.624909    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.897579    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.104420    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.106369    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.124570    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:19.398049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.606044    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.606286    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.624777    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:19.694921    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:19.898166    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.105703    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.107838    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.124747    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.397767    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.605121    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.608841    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.624557    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.687669    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:20.898658    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:21.105557    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.107855    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.124192    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.398578    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:21.471563    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:21.471593    4866 retry.go:31] will retry after 7.928437037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
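
	(Editor's note: addons.go hands each failed apply to retry.go, which reschedules it after a growing, jittered delay; the log shows 7.25s, then 7.93s, and later 10.6s. The sketch below assumes a simple base-times-attempt policy with random jitter; it is the shape of the pattern, not minikube's actual retry implementation.)

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a jittered, growing delay
	// between failures, roughly matching the intervals retry.go logs above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(float64(base) * (1 + rand.Float64()) * float64(i+1))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		_ = retry(3, 4*time.Second, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("apply failed")
			}
			return nil
		})
	}
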
	I1019 16:22:21.606038    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.607124    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.624854    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.897497    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.104532    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.106306    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.124046    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:22.194883    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:22.397989    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.606298    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.607128    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.625080    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:22.897556    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.104919    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.107230    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.124351    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.397773    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.606144    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.606554    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.624347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.898368    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.105165    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.106452    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.124490    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:24.398305    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.605932    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.605992    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.624716    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:24.694597    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:24.897566    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.104859    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.106823    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.124995    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.397089    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.607838    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.608073    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.624603    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.897361    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.105655    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.106199    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.123885    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:26.398252    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.606325    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.607104    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.624986    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:26.694684    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:26.897945    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.106427    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.107658    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.124745    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.398113    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.606322    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.606438    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.624151    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.897601    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.105030    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.106911    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.124727    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.397273    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.624913    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.631039    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.690724    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.725489    4866 node_ready.go:49] node "addons-567517" is "Ready"
	I1019 16:22:28.725572    4866 node_ready.go:38] duration metric: took 39.034094721s for node "addons-567517" to be "Ready" ...
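
	(Editor's note: the 39s wait that just completed is a poll of the node's Ready condition, logged as node_ready.go:57 retries above. One way to express that check with client-go is sketched below; the kubeconfig path, polling cadence, and function name are assumptions, not minikube's node_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			ok, err := nodeReady(context.Background(), cs, "addons-567517")
			if err == nil && ok {
				fmt.Println(`node "addons-567517" is "Ready"`)
				return
			}
			time.Sleep(2 * time.Second) // the log above retries on a similar cadence
		}
	}
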
	I1019 16:22:28.725600    4866 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:22:28.725686    4866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:22:28.751172    4866 api_server.go:72] duration metric: took 41.434565486s to wait for apiserver process to appear ...
	I1019 16:22:28.751244    4866 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:22:28.751276    4866 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 16:22:28.768938    4866 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 16:22:28.771626    4866 api_server.go:141] control plane version: v1.34.1
	I1019 16:22:28.771701    4866 api_server.go:131] duration metric: took 20.436968ms to wait for apiserver health ...
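
	(Editor's note: the healthz step just logged is a plain HTTPS GET against the apiserver endpoint, expecting status 200 with body "ok". A minimal sketch follows; certificate verification is disabled here purely for illustration, whereas the real check trusts the cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify is a shortcut for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Prints the same shape as the log line above: "... returned 200: ok"
		fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
	}
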
	I1019 16:22:28.771724    4866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 16:22:28.799796    4866 system_pods.go:59] 19 kube-system pods found
	I1019 16:22:28.799872    4866 system_pods.go:61] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending
	I1019 16:22:28.799893    4866 system_pods.go:61] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending
	I1019 16:22:28.799913    4866 system_pods.go:61] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending
	I1019 16:22:28.799948    4866 system_pods.go:61] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending
	I1019 16:22:28.799973    4866 system_pods.go:61] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:28.799992    4866 system_pods.go:61] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:28.800013    4866 system_pods.go:61] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:28.800032    4866 system_pods.go:61] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:28.800060    4866 system_pods.go:61] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending
	I1019 16:22:28.800083    4866 system_pods.go:61] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:28.800102    4866 system_pods.go:61] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:28.800126    4866 system_pods.go:61] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:28.800157    4866 system_pods.go:61] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending
	I1019 16:22:28.800182    4866 system_pods.go:61] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:28.800201    4866 system_pods.go:61] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending
	I1019 16:22:28.800221    4866 system_pods.go:61] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending
	I1019 16:22:28.800239    4866 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending
	I1019 16:22:28.800266    4866 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending
	I1019 16:22:28.800291    4866 system_pods.go:61] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending
	I1019 16:22:28.800312    4866 system_pods.go:74] duration metric: took 28.56984ms to wait for pod list to return data ...
	I1019 16:22:28.800334    4866 default_sa.go:34] waiting for default service account to be created ...
	I1019 16:22:28.809955    4866 default_sa.go:45] found service account: "default"
	I1019 16:22:28.810029    4866 default_sa.go:55] duration metric: took 9.674537ms for default service account to be created ...
	I1019 16:22:28.810052    4866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 16:22:28.817339    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:28.817419    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending
	I1019 16:22:28.817439    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending
	I1019 16:22:28.817459    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending
	I1019 16:22:28.817498    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending
	I1019 16:22:28.817521    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:28.817541    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:28.817577    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:28.817599    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:28.817617    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending
	I1019 16:22:28.817638    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:28.817675    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:28.817698    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:28.817716    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending
	I1019 16:22:28.817752    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:28.817771    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending
	I1019 16:22:28.817790    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending
	I1019 16:22:28.817821    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending
	I1019 16:22:28.817843    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending
	I1019 16:22:28.817864    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending
	I1019 16:22:28.817909    4866 retry.go:31] will retry after 288.129516ms: missing components: kube-dns
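
	(Editor's note: each "missing components: kube-dns" retry above comes from scanning the kube-system pod list for required components and re-polling until every one has a Running pod. The condensed sketch below shows that bookkeeping; the function name, the map-based pod model, and the component list are assumptions, not minikube's system_pods.go.)

	package main

	import "fmt"

	// missingComponents returns the required components with no Running pod,
	// mirroring the "missing components: kube-dns" retries in the log above.
	func missingComponents(required []string, podPhase map[string]string) []string {
		var missing []string
		for _, name := range required {
			if podPhase[name] != "Running" {
				missing = append(missing, name)
			}
		}
		return missing
	}

	func main() {
		phases := map[string]string{
			"coredns-66bc5c9577-t5ksp": "Pending", // kube-dns is served by coredns
			"kube-proxy-z49jr":         "Running",
		}
		fmt.Println(missingComponents([]string{"coredns-66bc5c9577-t5ksp"}, phases))
	}
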
	I1019 16:22:28.938380    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.129781    4866 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:22:29.129805    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.135996    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:29.136034    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:29.136041    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending
	I1019 16:22:29.136046    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending
	I1019 16:22:29.136050    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending
	I1019 16:22:29.136053    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:29.136058    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:29.136063    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:29.136068    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:29.136073    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending
	I1019 16:22:29.136077    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:29.136081    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:29.136087    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:29.136100    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending
	I1019 16:22:29.136109    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:29.136120    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:29.136125    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending
	I1019 16:22:29.136132    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.136141    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending
	I1019 16:22:29.136145    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending
	I1019 16:22:29.136159    4866 retry.go:31] will retry after 324.4012ms: missing components: kube-dns
	I1019 16:22:29.136681    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.141131    4866 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:22:29.141155    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:29.400981    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:29.419371    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.470749    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:29.470788    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:29.470797    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:29.470807    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:29.470815    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:29.470821    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:29.470827    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:29.470832    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:29.470842    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:29.470851    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:29.470861    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:29.470866    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:29.470872    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:29.470885    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:29.470894    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:29.470909    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:29.470917    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:29.470929    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.470936    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.470947    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:29.470962    4866 retry.go:31] will retry after 439.223247ms: missing components: kube-dns
	I1019 16:22:29.606945    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.607532    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.624681    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:29.898366    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.935080    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:29.935123    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:29.935131    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:29.935140    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:29.935146    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:29.935156    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:29.935162    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:29.935173    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:29.935178    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:29.935185    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:29.935193    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:29.935198    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:29.935204    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:29.935215    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:29.935221    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:29.935228    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:29.935234    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:29.935242    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.935251    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.935261    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:29.935276    4866 retry.go:31] will retry after 551.509302ms: missing components: kube-dns
	I1019 16:22:30.109580    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.109716    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.127215    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.397785    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:30.500415    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:30.500454    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:30.500463    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:30.500473    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:30.500480    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:30.500485    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:30.500490    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:30.500495    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:30.500499    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:30.500507    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:30.500524    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:30.500534    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:30.500540    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:30.500547    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:30.500557    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:30.500564    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:30.500576    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:30.500582    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:30.500589    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:30.500593    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:30.500608    4866 retry.go:31] will retry after 537.006592ms: missing components: kube-dns
	I1019 16:22:30.611717    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.611932    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.625143    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.851086    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.450065736s)
	W1019 16:22:30.851121    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:30.851160    4866 retry.go:31] will retry after 10.616384705s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:30.898291    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.044715    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:31.044756    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:31.044765    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:31.044772    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:31.044781    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:31.044786    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:31.044791    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:31.044797    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:31.044805    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:31.044815    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:31.044825    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:31.044831    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:31.044837    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:31.044850    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:31.044857    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:31.044863    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:31.044872    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:31.044879    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.044891    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.044898    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:31.044914    4866 retry.go:31] will retry after 858.848711ms: missing components: kube-dns
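The retry.go lines in this log (10.616s above, 858ms here, 17.14s and 18.76s further down) show the shape of the helper: run the step, and on failure sleep a growing, jittered delay before the next attempt. A stdlib-only sketch of that pattern; the doubling factor, 30s cap, and +/-10% jitter are assumptions for illustration, not minikube's actual constants:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff is an illustrative stand-in for the retry.go behaviour
    // visible in this log: keep re-running a failing step, waiting longer
    // (with jitter) between attempts.
    func retryWithBackoff(attempts int, first time.Duration, step func() error) error {
        delay := first
        var err error
        for i := 0; i < attempts; i++ {
            if err = step(); err == nil {
                return nil
            }
            // Spread the delay +/-10% so concurrent waiters don't retry in
            // lockstep (the exact jitter scheme is an assumption).
            jitter := time.Duration(rand.Int63n(int64(delay)/5+1)) - delay/10
            fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            if delay *= 2; delay > 30*time.Second {
                delay = 30 * time.Second
            }
        }
        return err
    }

    func main() {
        err := retryWithBackoff(3, time.Second, func() error {
            return fmt.Errorf("Process exited with status 1")
        })
        fmt.Println("gave up:", err)
    }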
	I1019 16:22:31.146488    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:31.146678    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.146859    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:31.398641    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.607398    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.608158    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:31.626698    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:31.898121    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.908957    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:31.908992    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:31.909002    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:31.909011    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:31.909018    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:31.909027    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:31.909032    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:31.909036    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:31.909040    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:31.909052    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:31.909056    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:31.909062    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:31.909073    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:31.909079    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:31.909089    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:31.909095    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:31.909105    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:31.909111    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.909121    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.909132    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:31.909140    4866 system_pods.go:126] duration metric: took 3.099070958s to wait for k8s-apps to be running ...
	I1019 16:22:31.909157    4866 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 16:22:31.909212    4866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:22:31.924775    4866 system_svc.go:56] duration metric: took 15.61464ms WaitForService to wait for kubelet
	I1019 16:22:31.924809    4866 kubeadm.go:587] duration metric: took 44.608202487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:22:31.924827    4866 node_conditions.go:102] verifying NodePressure condition ...
	I1019 16:22:31.927708    4866 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 16:22:31.927740    4866 node_conditions.go:123] node cpu capacity is 2
	I1019 16:22:31.927752    4866 node_conditions.go:105] duration metric: took 2.920052ms to run NodePressure ...
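node_conditions.go reads the node's advertised capacity (203034800Ki of ephemeral storage and 2 CPUs here) to verify the NodePressure condition. The same read with client-go, as a minimal sketch; the kubeconfig path is taken from the apply commands in this log, and the rest is an assumption rather than minikube's own code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Status.Capacity backs the "node storage ephemeral capacity"
            // and "node cpu capacity" lines above.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }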
	I1019 16:22:31.927765    4866 start.go:242] waiting for startup goroutines ...
	I1019 16:22:32.105082    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.107279    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.124310    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.399706    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.606530    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.606737    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.708854    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.898221    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.109595    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.109861    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.125102    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.398745    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.608262    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.608507    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.628097    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.899011    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.105844    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.107147    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:34.125592    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.397604    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.606186    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.608057    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:34.625457    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.897087    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.107145    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.108438    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:35.124979    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.398697    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.607430    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:35.607915    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.625565    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.897974    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.106277    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.108361    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:36.124525    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.397921    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.606100    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.607545    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:36.624847    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.904793    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.106156    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.106610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:37.124800    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.398352    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.605205    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.606315    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:37.624837    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.900493    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.107408    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.107734    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:38.124981    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.398574    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.605193    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.607143    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:38.625454    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.902955    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.106825    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.108470    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:39.125196    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.398529    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.605226    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.608405    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:39.624602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.900227    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.109708    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:40.120524    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.152404    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.397916    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.606981    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.607960    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:40.630183    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.899659    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.107842    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.109598    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:41.125291    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:41.399515    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.467820    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:41.625590    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:41.627302    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.636317    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:41.935094    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.112943    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:42.119807    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:42.133280    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.398302    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.607609    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:42.608135    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:42.624856    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.897972    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:43.107543    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:43.108103    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:43.124508    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.212899    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.744991443s)
	W1019 16:22:43.212977    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:43.213010    4866 retry.go:31] will retry after 17.143581913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
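These retries are futile by construction: the file on disk is what fails validation, so every attempt reproduces the identical stdout and stderr, and the waits only pad out the gadget addon's apply. The stderr's own escape hatch, --validate=false, skips client-side schema validation only; a sketch of invoking it from Go, with the caveat (an assumption about kubectl's decoder, so treat it as such) that a document genuinely missing kind may still fail to decode, meaning the real fix is repairing ig-crd.yaml:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same apply as in the log, plus the --validate=false flag the stderr
        // suggests. This only disables client-side schema validation; it does
        // not repair the manifest.
        cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/ig-crd.yaml",
            "-f", "/etc/kubernetes/addons/ig-deployment.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }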
	I1019 16:22:43.398771    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:43.605671    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:43.606760    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:43.625568    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.898211    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:44.107334    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:44.107585    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:44.124839    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.398155    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:44.623264    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:44.630826    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.631540    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:44.898171    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:45.136332    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:45.137011    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:45.144118    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.399347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:45.606889    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:45.608613    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:45.624967    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.898129    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:46.107364    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:46.108901    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:46.124991    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.398356    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:46.604603    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:46.607035    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:46.625636    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.897793    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:47.105464    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:47.107485    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:47.124571    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:47.397975    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:47.606819    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:47.610486    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:47.624748    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:47.897777    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:48.106946    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:48.108084    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:48.125097    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:48.398669    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:48.604729    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:48.606798    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:48.624872    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:48.898049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:49.106602    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:49.107695    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:49.125479    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:49.397800    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:49.604975    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:49.607071    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:49.623535    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:49.897300    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:50.107444    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:50.108542    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:50.129479    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:50.398448    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:50.608541    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:50.609031    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:50.624682    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:50.898037    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:51.117857    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:51.118743    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:51.125062    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:51.400207    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:51.612610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:51.614846    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:51.627023    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:51.899475    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:52.149965    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:52.150341    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:52.184388    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:52.401627    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:52.605493    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:52.608204    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:52.625135    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:52.898049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:53.114781    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:53.115234    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:53.125665    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:53.399350    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:53.605043    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:53.607784    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:53.631207    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:53.897907    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:54.105820    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:54.107750    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:54.125160    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:54.397372    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:54.606308    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:54.607784    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:54.625626    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:54.897756    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:55.105901    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:55.108679    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:55.125659    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:55.397296    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:55.604708    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:55.606767    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:55.624669    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:55.897618    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:56.104880    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:56.106701    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:56.124653    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:56.397534    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:56.605793    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:56.607228    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:56.624307    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:56.899643    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:57.106728    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:57.108760    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:57.125568    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:57.397880    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:57.605302    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:57.607493    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:57.624625    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:57.897682    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:58.108035    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:58.108152    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:58.125118    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:58.398435    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:58.609183    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:58.609582    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:58.624558    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:58.897383    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:59.106525    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:59.106743    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:59.125093    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:59.398136    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:59.605762    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:59.607017    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:59.624990    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:59.898231    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:00.108676    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:00.109105    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:00.140669    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:00.358947    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:23:00.400026    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:00.607959    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:00.631587    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:00.632046    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:00.906499    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:01.107278    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:01.108224    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:01.124582    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:01.397728    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:01.607933    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:01.608060    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:01.633393    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:01.758114    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.399129964s)
	W1019 16:23:01.758151    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:01.758169    4866 retry.go:31] will retry after 18.757347671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:01.898414    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:02.107171    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:02.107618    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:02.125419    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:02.397850    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:02.604818    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:02.606512    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:02.628755    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:02.898367    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:03.107688    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:03.107776    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:03.125585    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:03.398466    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:03.605223    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:03.607730    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:03.626165    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:03.897939    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:04.105291    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:04.107560    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:04.125563    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:04.397817    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:04.606783    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:04.608593    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:04.625079    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:04.898372    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:05.105196    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:05.107940    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:05.124921    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:05.398171    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:05.612597    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:05.613425    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:05.639006    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:05.898354    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:06.105061    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:06.107971    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:06.125433    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:06.397798    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:06.606787    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:06.608514    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:06.624865    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:06.898454    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:07.105570    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:07.108725    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:07.125408    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:07.398612    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:07.613685    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:07.614245    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:07.627824    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:07.898485    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:08.105023    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:08.106604    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:08.125162    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:08.435821    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:08.604884    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:08.606694    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:08.624963    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:08.897500    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:09.107373    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:09.109639    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:09.125433    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:09.398779    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:09.607817    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:09.609323    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:09.624395    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:09.897506    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:10.104693    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:10.106610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:10.124909    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:10.397890    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:10.607501    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:10.607916    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:10.627428    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:10.897768    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:11.107577    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:11.107875    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:11.125063    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:11.398156    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:11.607595    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:11.608117    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:11.623974    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:11.898209    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:12.105895    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:12.108622    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:12.125066    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:12.398055    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:12.606426    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:12.607544    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:12.625331    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:12.897922    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:13.105025    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:13.107013    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:13.123908    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:13.398588    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:13.605525    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:13.608119    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:13.624465    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:13.898199    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:14.106104    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:14.106249    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:14.124087    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:14.397713    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:14.604786    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:14.607078    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:14.623976    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:14.903062    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:15.107288    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:15.107926    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:15.125569    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:15.398887    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:15.605330    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:15.608200    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:15.624132    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:15.898401    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:16.105617    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:16.106726    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:16.124637    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:16.397828    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:16.615422    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:16.627929    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:16.629050    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:16.898221    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:17.105752    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:17.108435    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:17.124801    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:17.401858    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:17.607302    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:17.607726    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:17.625156    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:17.898056    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:18.106047    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:18.107366    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:18.124457    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:18.398059    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:18.606917    4866 kapi.go:107] duration metric: took 1m25.003488426s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 16:23:18.607096    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:18.626267    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:18.897424    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:19.105177    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:19.124502    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:19.400360    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:19.610410    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:19.626339    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:19.897450    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:20.105503    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:20.124753    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:20.397719    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:20.516069    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:23:20.605394    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:20.624405    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:20.898217    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:21.105075    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:21.124735    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:21.398094    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:21.610249    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:21.616167    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100060724s)
	W1019 16:23:21.616211    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:21.616233    4866 retry.go:31] will retry after 27.141385061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
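
The stderr above points at the root cause: kubectl refuses any manifest document that omits its type metadata. Every document in an applied file must set apiVersion and kind at the top level, as in the minimal sketch below (an illustrative ConfigMap, not the actual contents of ig-crd.yaml, which this log does not show):

    # Illustrative only -- not the real ig-crd.yaml.
    # Without the first two fields, kubectl reports
    # "apiVersion not set, kind not set".
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example
    data: {}

Alternatively, validation can be switched off with --validate=false, as the error message itself suggests, though that only hides the malformed document rather than repairing it.
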
	I1019 16:23:21.624864    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:21.898615    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:22.106659    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:22.127460    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:22.397975    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:22.605492    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:22.625061    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:22.898337    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:23.105307    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:23.124689    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:23.398254    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:23.606110    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:23.624869    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:23.897378    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:24.105272    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:24.124374    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:24.397480    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:24.604414    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:24.629347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:24.910080    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:25.106303    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:25.125100    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:25.400757    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:25.605490    4866 kapi.go:107] duration metric: took 1m32.003984723s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 16:23:25.625030    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:25.898770    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:26.195244    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:26.397225    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:26.625507    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:26.897781    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:27.126064    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:27.398004    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:27.625582    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:27.898259    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:28.124930    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:28.398624    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:28.625687    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:28.897690    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:29.125093    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:29.398009    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:29.641980    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:29.898933    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:30.127104    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:30.398223    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:30.624420    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:30.897485    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:31.125600    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:31.398743    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:31.625148    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:31.897828    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:32.124747    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:32.397503    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:32.624856    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:32.899910    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:33.124427    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:33.397944    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:33.625373    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:33.898046    4866 kapi.go:107] duration metric: took 1m36.503679093s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 16:23:33.903373    4866 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-567517 cluster.
	I1019 16:23:33.906720    4866 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 16:23:33.909502    4866 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
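
The gcp-auth messages above describe an opt-out: pods carrying a label with the gcp-auth-skip-secret key are left untouched by the mutating webhook. A minimal sketch of such a pod follows (the pod name, image, and label value are illustrative assumptions; the log only confirms the label key):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-creds-pod               # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"   # key from the message above; value assumed
    spec:
      containers:
      - name: app
        image: busybox                 # illustrative image
        command: ["sleep", "3600"]

For pods that already exist and should get the mounted credentials, the log's own guidance applies: recreate them, or rerun the addon enable step with --refresh.
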
	I1019 16:23:34.124552    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:34.625513    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:35.126106    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:35.624812    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:36.124526    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:36.625336    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:37.124962    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:37.624456    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:38.124447    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:38.625734    4866 kapi.go:107] duration metric: took 1m44.504684737s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 16:23:48.757858    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 16:23:49.575413    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 16:23:49.575511    4866 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1019 16:23:49.580931    4866 out.go:179] * Enabled addons: ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 16:23:49.583792    4866 addons.go:515] duration metric: took 2m2.266843881s for enable addons: enabled=[ingress-dns nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 16:23:49.583850    4866 start.go:247] waiting for cluster config update ...
	I1019 16:23:49.583874    4866 start.go:256] writing updated cluster config ...
	I1019 16:23:49.584833    4866 ssh_runner.go:195] Run: rm -f paused
	I1019 16:23:49.588681    4866 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:23:49.592501    4866 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t5ksp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.597772    4866 pod_ready.go:94] pod "coredns-66bc5c9577-t5ksp" is "Ready"
	I1019 16:23:49.597805    4866 pod_ready.go:86] duration metric: took 5.275623ms for pod "coredns-66bc5c9577-t5ksp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.600211    4866 pod_ready.go:83] waiting for pod "etcd-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.604650    4866 pod_ready.go:94] pod "etcd-addons-567517" is "Ready"
	I1019 16:23:49.604677    4866 pod_ready.go:86] duration metric: took 4.435712ms for pod "etcd-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.607121    4866 pod_ready.go:83] waiting for pod "kube-apiserver-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.614406    4866 pod_ready.go:94] pod "kube-apiserver-addons-567517" is "Ready"
	I1019 16:23:49.614477    4866 pod_ready.go:86] duration metric: took 7.322007ms for pod "kube-apiserver-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.618184    4866 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.993174    4866 pod_ready.go:94] pod "kube-controller-manager-addons-567517" is "Ready"
	I1019 16:23:49.993203    4866 pod_ready.go:86] duration metric: took 374.9902ms for pod "kube-controller-manager-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:50.192695    4866 pod_ready.go:83] waiting for pod "kube-proxy-z49jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:50.592709    4866 pod_ready.go:94] pod "kube-proxy-z49jr" is "Ready"
	I1019 16:23:50.592733    4866 pod_ready.go:86] duration metric: took 400.009367ms for pod "kube-proxy-z49jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:50.793460    4866 pod_ready.go:83] waiting for pod "kube-scheduler-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:51.192559    4866 pod_ready.go:94] pod "kube-scheduler-addons-567517" is "Ready"
	I1019 16:23:51.192601    4866 pod_ready.go:86] duration metric: took 399.113391ms for pod "kube-scheduler-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:51.192615    4866 pod_ready.go:40] duration metric: took 1.603898841s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:23:51.591062    4866 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 16:23:51.594282    4866 out.go:179] * Done! kubectl is now configured to use "addons-567517" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 16:26:55 addons-567517 crio[836]: time="2025-10-19T16:26:55.248205867Z" level=info msg="Removed container df0dd4a369038f63eef0b5a69332f53723d687237c0dcd68d4cf0998b21ae549: kube-system/registry-creds-764b6fb674-ngnr2/registry-creds" id=4f4f0c1a-32e5-4562-9627-63e5accb2c35 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.400148982Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-dc7vc/POD" id=938d77d8-ede6-459f-8a1f-0d8f9f24ac44 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.400218825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.427582594Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dc7vc Namespace:default ID:4f45eca94efd9566018a95450af93074f69aca96f73173060d6fa87e1f6c83de UID:2a68e7d0-7d01-46a7-add0-a4feda0a883e NetNS:/var/run/netns/d5e4fa22-ad75-4537-bae7-c7dadc72dbb4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a5f860}] Aliases:map[]}"
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.427630126Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-dc7vc to CNI network \"kindnet\" (type=ptp)"
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.461222639Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dc7vc Namespace:default ID:4f45eca94efd9566018a95450af93074f69aca96f73173060d6fa87e1f6c83de UID:2a68e7d0-7d01-46a7-add0-a4feda0a883e NetNS:/var/run/netns/d5e4fa22-ad75-4537-bae7-c7dadc72dbb4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a5f860}] Aliases:map[]}"
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.464411682Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-dc7vc for CNI network kindnet (type=ptp)"
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.468530154Z" level=info msg="Ran pod sandbox 4f45eca94efd9566018a95450af93074f69aca96f73173060d6fa87e1f6c83de with infra container: default/hello-world-app-5d498dc89-dc7vc/POD" id=938d77d8-ede6-459f-8a1f-0d8f9f24ac44 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.474364473Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4cebeea0-ef33-4c5f-bd28-1b81579f3268 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.474707205Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=4cebeea0-ef33-4c5f-bd28-1b81579f3268 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.474876618Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=4cebeea0-ef33-4c5f-bd28-1b81579f3268 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.477259327Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=cc167a5d-921d-4497-91c9-bb4787873ce5 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:26:57 addons-567517 crio[836]: time="2025-10-19T16:26:57.480283003Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.117267816Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=cc167a5d-921d-4497-91c9-bb4787873ce5 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.117948634Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=97a2b83b-d060-4266-9fb0-8d5f88464075 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.119874166Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e2f9eda7-1a5e-44f8-acbf-a87855fa207b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.130795819Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-dc7vc/hello-world-app" id=fe85a956-5c4a-4c6a-bec8-09b9d07e293e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.131534862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.142350372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.142720977Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9141ddf4f3938347832ca2e55431149d187f0067e54fc98349ca4489c951f015/merged/etc/passwd: no such file or directory"
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.14281486Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9141ddf4f3938347832ca2e55431149d187f0067e54fc98349ca4489c951f015/merged/etc/group: no such file or directory"
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.143171516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.172605342Z" level=info msg="Created container c589114e2d135ef8dddcca3093a0c3591cd363e186a8c316643175d98ea07f07: default/hello-world-app-5d498dc89-dc7vc/hello-world-app" id=fe85a956-5c4a-4c6a-bec8-09b9d07e293e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.175685863Z" level=info msg="Starting container: c589114e2d135ef8dddcca3093a0c3591cd363e186a8c316643175d98ea07f07" id=9b125c27-c3e1-4a02-8cf4-558916396af1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 16:26:58 addons-567517 crio[836]: time="2025-10-19T16:26:58.181446828Z" level=info msg="Started container" PID=7302 containerID=c589114e2d135ef8dddcca3093a0c3591cd363e186a8c316643175d98ea07f07 description=default/hello-world-app-5d498dc89-dc7vc/hello-world-app id=9b125c27-c3e1-4a02-8cf4-558916396af1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f45eca94efd9566018a95450af93074f69aca96f73173060d6fa87e1f6c83de
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	c589114e2d135       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   4f45eca94efd9       hello-world-app-5d498dc89-dc7vc             default
	3094b4a7913e2       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             4 seconds ago            Exited              registry-creds                           2                   e331c90cd751a       registry-creds-764b6fb674-ngnr2             kube-system
	b70aa844bb313       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   84089b2c9b57e       nginx                                       default
	185a0d8fde466       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   947bb1a23db79       busybox                                     default
	12ea8dcf61f96       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                    kube-system
	b3e64e8c305d3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                    kube-system
	4303ea4e21d41       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                    kube-system
	82a85755a9b57       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                    kube-system
	a1ca6dedcb00c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   ff99a7b2d73ca       gcp-auth-78565c9fb4-qw69p                   gcp-auth
	bbc0d449ae5d2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                    kube-system
	593f4dde7337f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   fba4b87a58bbf       gadget-b4v28                                gadget
	838ce5208b8da       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   4194579cb1297       ingress-nginx-controller-675c5ddd98-n9vqc   ingress-nginx
	43da60e537720       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   a73efc06b00e4       registry-proxy-9vlrb                        kube-system
	1509a0b94cd4f       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   e110a22587053       nvidia-device-plugin-daemonset-s8mrl        kube-system
	d10be64e72568       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   0e83e6ec02cb4       registry-6b586f9694-tf8nq                   kube-system
	eafe11c1243da       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   116e98e9e1bc9       csi-hostpath-attacher-0                     kube-system
	305f495ac25ce       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                    kube-system
	351389adaebd8       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   fa87bce3e62cc       yakd-dashboard-5ff678cb9-9cg5f              yakd-dashboard
	532a5e202b24c       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             4 minutes ago            Exited              patch                                    2                   be62da52415c4       ingress-nginx-admission-patch-g5z8w         ingress-nginx
	40e54317c12f2       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   c6a7d17996190       csi-hostpath-resizer-0                      kube-system
	cd9dd5ae64c43       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   74ee3a4fe7030       kube-ingress-dns-minikube                   kube-system
	3e9d456c959c9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   afc5a6690f416       snapshot-controller-7d9fbc56b8-fsjzh        kube-system
	b1f7b13f9f431       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   b4dc81bb2d815       ingress-nginx-admission-create-qdcxz        ingress-nginx
	375a875dfdf02       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   12c70a7e3008b       local-path-provisioner-648f6765c9-klzcv     local-path-storage
	1871e77487146       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   23c809f243f38       metrics-server-85b7d694d7-544h5             kube-system
	530194304d419       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   4171627f0e1a9       snapshot-controller-7d9fbc56b8-tnds8        kube-system
	fe31102d224ff       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   12c7a59df3dbe       cloud-spanner-emulator-86bd5cbb97-wks95     default
	42990e86d93f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   56178b8f2dc7f       coredns-66bc5c9577-t5ksp                    kube-system
	48cf170685f60       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   21afe65583e09       storage-provisioner                         kube-system
	6e17fa2c1568b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   a22d2c4526577       kindnet-2qd77                               kube-system
	d771336608d23       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   2d083856dfc77       kube-proxy-z49jr                            kube-system
	16eba4f0809b0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   e13ef3e95863f       kube-scheduler-addons-567517                kube-system
	b0cb46d490358       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   df1ed298095e0       etcd-addons-567517                          kube-system
	60b936e140fc2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   070e2e5a4c033       kube-apiserver-addons-567517                kube-system
	eecd76037af86       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   7ebb43d35321f       kube-controller-manager-addons-567517       kube-system
	
	
	==> coredns [42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287] <==
	[INFO] 10.244.0.18:45703 - 29473 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002469059s
	[INFO] 10.244.0.18:45703 - 24787 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000146284s
	[INFO] 10.244.0.18:45703 - 6700 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000101015s
	[INFO] 10.244.0.18:44303 - 63811 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150534s
	[INFO] 10.244.0.18:44303 - 64049 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000147703s
	[INFO] 10.244.0.18:45885 - 1546 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104166s
	[INFO] 10.244.0.18:45885 - 1365 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117188s
	[INFO] 10.244.0.18:37068 - 27297 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093236s
	[INFO] 10.244.0.18:37068 - 27486 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145078s
	[INFO] 10.244.0.18:58395 - 8969 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00138282s
	[INFO] 10.244.0.18:58395 - 9422 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001451112s
	[INFO] 10.244.0.18:46868 - 4159 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101179s
	[INFO] 10.244.0.18:46868 - 4563 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000140983s
	[INFO] 10.244.0.21:49815 - 29555 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000217406s
	[INFO] 10.244.0.21:57694 - 15060 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000140687s
	[INFO] 10.244.0.21:45422 - 12553 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104297s
	[INFO] 10.244.0.21:49644 - 22327 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107546s
	[INFO] 10.244.0.21:56916 - 10763 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093367s
	[INFO] 10.244.0.21:35526 - 11433 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100457s
	[INFO] 10.244.0.21:41246 - 15966 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002335715s
	[INFO] 10.244.0.21:42251 - 48755 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002067658s
	[INFO] 10.244.0.21:49567 - 11800 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005187023s
	[INFO] 10.244.0.21:38817 - 60872 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.005522503s
	[INFO] 10.244.0.24:45000 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174352s
	[INFO] 10.244.0.24:43946 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000209372s
	
	
	==> describe nodes <==
	Name:               addons-567517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-567517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=addons-567517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_21_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-567517
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-567517"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-567517
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:26:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:26:46 +0000   Sun, 19 Oct 2025 16:21:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:26:46 +0000   Sun, 19 Oct 2025 16:21:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:26:46 +0000   Sun, 19 Oct 2025 16:21:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:26:46 +0000   Sun, 19 Oct 2025 16:22:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-567517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                173041d4-c781-472d-8e69-908cdc326432
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     cloud-spanner-emulator-86bd5cbb97-wks95      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  default                     hello-world-app-5d498dc89-dc7vc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-b4v28                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  gcp-auth                    gcp-auth-78565c9fb4-qw69p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-n9vqc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m6s
	  kube-system                 coredns-66bc5c9577-t5ksp                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m12s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 csi-hostpathplugin-mgwtr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-addons-567517                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m18s
	  kube-system                 kindnet-2qd77                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m12s
	  kube-system                 kube-apiserver-addons-567517                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-addons-567517        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-z49jr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-addons-567517                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 metrics-server-85b7d694d7-544h5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m7s
	  kube-system                 nvidia-device-plugin-daemonset-s8mrl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 registry-6b586f9694-tf8nq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 registry-creds-764b6fb674-ngnr2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 registry-proxy-9vlrb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-7d9fbc56b8-fsjzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-tnds8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  local-path-storage          local-path-provisioner-648f6765c9-klzcv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9cg5f               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m10s                  kube-proxy       
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node addons-567517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node addons-567517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m24s (x8 over 5m24s)  kubelet          Node addons-567517 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m18s                  kubelet          Node addons-567517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s                  kubelet          Node addons-567517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s                  kubelet          Node addons-567517 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m13s                  node-controller  Node addons-567517 event: Registered Node addons-567517 in Controller
	  Normal   NodeReady                4m31s                  kubelet          Node addons-567517 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014509] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499579] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033288] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.729802] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.182201] kauditd_printk_skb: 36 callbacks suppressed
	[Oct19 16:21] overlayfs: idmapped layers are currently not supported
	[  +0.059278] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b] <==
	{"level":"warn","ts":"2025-10-19T16:21:37.603196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.618925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.633072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.655752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.672330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.695370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.710930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.727233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.746953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.758348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.781743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.790742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.811273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.826434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.846670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.875804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.900943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.947025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:38.051505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:54.399150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:54.422155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.416869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.431294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.463495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.477951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43096","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a1ca6dedcb00c2720a53738d333bfb129b6f337bce0236fe23a96228cb907986] <==
	2025/10/19 16:23:32 GCP Auth Webhook started!
	2025/10/19 16:23:52 Ready to marshal response ...
	2025/10/19 16:23:52 Ready to write response ...
	2025/10/19 16:23:52 Ready to marshal response ...
	2025/10/19 16:23:52 Ready to write response ...
	2025/10/19 16:23:52 Ready to marshal response ...
	2025/10/19 16:23:52 Ready to write response ...
	2025/10/19 16:24:12 Ready to marshal response ...
	2025/10/19 16:24:12 Ready to write response ...
	2025/10/19 16:24:12 Ready to marshal response ...
	2025/10/19 16:24:12 Ready to write response ...
	2025/10/19 16:24:12 Ready to marshal response ...
	2025/10/19 16:24:12 Ready to write response ...
	2025/10/19 16:24:20 Ready to marshal response ...
	2025/10/19 16:24:20 Ready to write response ...
	2025/10/19 16:24:33 Ready to marshal response ...
	2025/10/19 16:24:33 Ready to write response ...
	2025/10/19 16:24:36 Ready to marshal response ...
	2025/10/19 16:24:36 Ready to write response ...
	2025/10/19 16:24:54 Ready to marshal response ...
	2025/10/19 16:24:54 Ready to write response ...
	2025/10/19 16:26:57 Ready to marshal response ...
	2025/10/19 16:26:57 Ready to write response ...
	
	
	==> kernel <==
	 16:26:59 up 9 min,  0 user,  load average: 0.62, 1.27, 0.75
	Linux addons-567517 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb] <==
	I1019 16:24:58.498699       1 main.go:301] handling current node
	I1019 16:25:08.498633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:08.498666       1 main.go:301] handling current node
	I1019 16:25:18.498971       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:18.499145       1 main.go:301] handling current node
	I1019 16:25:28.503107       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:28.503138       1 main.go:301] handling current node
	I1019 16:25:38.498608       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:38.498648       1 main.go:301] handling current node
	I1019 16:25:48.498625       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:48.498730       1 main.go:301] handling current node
	I1019 16:25:58.503634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:25:58.503667       1 main.go:301] handling current node
	I1019 16:26:08.506613       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:26:08.506643       1 main.go:301] handling current node
	I1019 16:26:18.498634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:26:18.498667       1 main.go:301] handling current node
	I1019 16:26:28.502105       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:26:28.502135       1 main.go:301] handling current node
	I1019 16:26:38.502603       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:26:38.502632       1 main.go:301] handling current node
	I1019 16:26:48.497549       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:26:48.497583       1 main.go:301] handling current node
	I1019 16:26:58.497823       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:26:58.497873       1 main.go:301] handling current node
	
	
	==> kube-apiserver [60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f] <==
	W1019 16:22:16.477951       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:22:28.633521       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.119.53:443: connect: connection refused
	E1019 16:22:28.633632       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.119.53:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:28.634094       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.119.53:443: connect: connection refused
	E1019 16:22:28.634193       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.119.53:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:28.735062       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.119.53:443: connect: connection refused
	E1019 16:22:28.735103       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.119.53:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.151122       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:52.152484       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 16:22:52.152649       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 16:22:52.159735       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.160670       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.171680       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.214378       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	I1019 16:22:52.403696       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 16:24:01.060544       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37120: use of closed network connection
	E1019 16:24:01.190491       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37128: use of closed network connection
	I1019 16:24:36.123717       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1019 16:24:36.529316       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.231.226"}
	I1019 16:24:44.115524       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1019 16:24:45.790698       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1019 16:26:57.272712       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.9.135"}
	
	
	==> kube-controller-manager [eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9] <==
	I1019 16:21:46.414674       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 16:21:46.417531       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 16:21:46.422258       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:21:46.422599       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:21:46.432559       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:21:46.441602       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:21:46.447125       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:21:46.447771       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:21:46.448909       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:21:46.448963       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:21:46.449015       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 16:21:46.451981       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:21:46.456009       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:21:46.460229       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E1019 16:21:52.402968       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1019 16:22:16.409282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 16:22:16.409444       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 16:22:16.409483       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 16:22:16.452315       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 16:22:16.456572       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 16:22:16.510873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:22:16.557794       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:22:31.412403       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1019 16:22:46.516305       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 16:22:46.565526       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9] <==
	I1019 16:21:48.291759       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:21:48.388098       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:21:48.490614       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:21:48.490649       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:21:48.490718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:21:48.604805       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:21:48.604859       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:21:48.620607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:21:48.634344       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:21:48.634378       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:21:48.635782       1 config.go:200] "Starting service config controller"
	I1019 16:21:48.635799       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:21:48.635816       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:21:48.635820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:21:48.635838       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:21:48.635842       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:21:48.636464       1 config.go:309] "Starting node config controller"
	I1019 16:21:48.636477       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:21:48.636483       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:21:48.739008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:21:48.739044       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:21:48.739080       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6] <==
	E1019 16:21:39.163634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:21:39.163740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:21:39.163848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:21:39.163945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:21:39.164051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:39.164150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:21:39.164264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:21:39.164362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:39.164457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 16:21:39.164562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 16:21:39.164781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:21:39.164847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:21:39.174864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 16:21:39.993748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 16:21:40.007045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:21:40.018198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:21:40.078145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:40.087052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 16:21:40.115587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:40.156738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:21:40.191398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 16:21:40.268543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:21:40.332562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:21:40.370162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1019 16:21:43.404517       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 16:25:42 addons-567517 kubelet[1288]: I1019 16:25:42.607426    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-s8mrl" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:25:49 addons-567517 kubelet[1288]: I1019 16:25:49.608231    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-tf8nq" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:25:58 addons-567517 kubelet[1288]: I1019 16:25:58.606917    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9vlrb" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:38 addons-567517 kubelet[1288]: I1019 16:26:38.809684    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ngnr2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:38 addons-567517 kubelet[1288]: W1019 16:26:38.838858    1288 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/crio-e331c90cd751a415fa666a32b4df691c98ac6c927dd5fa3821806654ca2b38ce WatchSource:0}: Error finding container e331c90cd751a415fa666a32b4df691c98ac6c927dd5fa3821806654ca2b38ce: Status 404 returned error can't find the container with id e331c90cd751a415fa666a32b4df691c98ac6c927dd5fa3821806654ca2b38ce
	Oct 19 16:26:41 addons-567517 kubelet[1288]: I1019 16:26:41.177705    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ngnr2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:41 addons-567517 kubelet[1288]: I1019 16:26:41.177775    1288 scope.go:117] "RemoveContainer" containerID="ccaacadd20be98ed49ed5a0f166f2fa6fa57d0adf7e468895bf1a58494849c48"
	Oct 19 16:26:41 addons-567517 kubelet[1288]: I1019 16:26:41.727348    1288 scope.go:117] "RemoveContainer" containerID="ccaacadd20be98ed49ed5a0f166f2fa6fa57d0adf7e468895bf1a58494849c48"
	Oct 19 16:26:42 addons-567517 kubelet[1288]: I1019 16:26:42.184042    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ngnr2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:42 addons-567517 kubelet[1288]: I1019 16:26:42.184114    1288 scope.go:117] "RemoveContainer" containerID="df0dd4a369038f63eef0b5a69332f53723d687237c0dcd68d4cf0998b21ae549"
	Oct 19 16:26:42 addons-567517 kubelet[1288]: E1019 16:26:42.184293    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-ngnr2_kube-system(171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c)\"" pod="kube-system/registry-creds-764b6fb674-ngnr2" podUID="171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c"
	Oct 19 16:26:43 addons-567517 kubelet[1288]: I1019 16:26:43.187222    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ngnr2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:43 addons-567517 kubelet[1288]: I1019 16:26:43.187281    1288 scope.go:117] "RemoveContainer" containerID="df0dd4a369038f63eef0b5a69332f53723d687237c0dcd68d4cf0998b21ae549"
	Oct 19 16:26:43 addons-567517 kubelet[1288]: E1019 16:26:43.187425    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-ngnr2_kube-system(171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c)\"" pod="kube-system/registry-creds-764b6fb674-ngnr2" podUID="171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c"
	Oct 19 16:26:54 addons-567517 kubelet[1288]: I1019 16:26:54.607886    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ngnr2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:54 addons-567517 kubelet[1288]: I1019 16:26:54.607959    1288 scope.go:117] "RemoveContainer" containerID="df0dd4a369038f63eef0b5a69332f53723d687237c0dcd68d4cf0998b21ae549"
	Oct 19 16:26:55 addons-567517 kubelet[1288]: I1019 16:26:55.228892    1288 scope.go:117] "RemoveContainer" containerID="df0dd4a369038f63eef0b5a69332f53723d687237c0dcd68d4cf0998b21ae549"
	Oct 19 16:26:55 addons-567517 kubelet[1288]: I1019 16:26:55.229102    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ngnr2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:55 addons-567517 kubelet[1288]: I1019 16:26:55.229144    1288 scope.go:117] "RemoveContainer" containerID="3094b4a7913e25867202c5968a2e28189eed3370587c14e6e668d7747454c09a"
	Oct 19 16:26:55 addons-567517 kubelet[1288]: E1019 16:26:55.229284    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-ngnr2_kube-system(171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c)\"" pod="kube-system/registry-creds-764b6fb674-ngnr2" podUID="171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c"
	Oct 19 16:26:57 addons-567517 kubelet[1288]: E1019 16:26:57.095274    1288 status_manager.go:1018] "Failed to get status for pod" err="pods \"hello-world-app-5d498dc89-dc7vc\" is forbidden: User \"system:node:addons-567517\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-567517' and this object" podUID="2a68e7d0-7d01-46a7-add0-a4feda0a883e" pod="default/hello-world-app-5d498dc89-dc7vc"
	Oct 19 16:26:57 addons-567517 kubelet[1288]: I1019 16:26:57.273355    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jzf\" (UniqueName: \"kubernetes.io/projected/2a68e7d0-7d01-46a7-add0-a4feda0a883e-kube-api-access-h8jzf\") pod \"hello-world-app-5d498dc89-dc7vc\" (UID: \"2a68e7d0-7d01-46a7-add0-a4feda0a883e\") " pod="default/hello-world-app-5d498dc89-dc7vc"
	Oct 19 16:26:57 addons-567517 kubelet[1288]: I1019 16:26:57.273412    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2a68e7d0-7d01-46a7-add0-a4feda0a883e-gcp-creds\") pod \"hello-world-app-5d498dc89-dc7vc\" (UID: \"2a68e7d0-7d01-46a7-add0-a4feda0a883e\") " pod="default/hello-world-app-5d498dc89-dc7vc"
	Oct 19 16:26:57 addons-567517 kubelet[1288]: W1019 16:26:57.473403    1288 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/crio-4f45eca94efd9566018a95450af93074f69aca96f73173060d6fa87e1f6c83de WatchSource:0}: Error finding container 4f45eca94efd9566018a95450af93074f69aca96f73173060d6fa87e1f6c83de: Status 404 returned error can't find the container with id 4f45eca94efd9566018a95450af93074f69aca96f73173060d6fa87e1f6c83de
	Oct 19 16:26:58 addons-567517 kubelet[1288]: I1019 16:26:58.607131    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-tf8nq" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783] <==
	W1019 16:26:34.635221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:36.639550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:36.644334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:38.646900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:38.651460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:40.655974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:40.660319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:42.663552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:42.667871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:44.670617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:44.677474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:46.680813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:46.688628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:48.691708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:48.696231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:50.700404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:50.704873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:52.707687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:52.714382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:54.718176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:54.723083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:56.725965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:56.732889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:58.737216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:26:58.748218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
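Note on the storage-provisioner section at the end of the dump above: it is dominated by "v1 Endpoints is deprecated in v1.33+" warnings arriving roughly every two seconds, which suggests the provisioner still polls or watches v1 Endpoints (presumably for its leader-election lock); each request triggers the server-side deprecation warning. A hedged client-go sketch of the replacement the warning points at, listing discovery.k8s.io/v1 EndpointSlices instead (the kubeconfig path and namespace here are illustrative assumptions):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// discovery.k8s.io/v1 EndpointSlice is the resource the deprecation
		// warning recommends; listing it produces no such warning.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}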
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-567517 -n addons-567517
helpers_test.go:269: (dbg) Run:  kubectl --context addons-567517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-567517 describe pod ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-567517 describe pod ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w: exit status 1 (81.799312ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qdcxz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-g5z8w" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-567517 describe pod ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w: exit status 1
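Note on the two NotFound errors: the non-running pods reported at helpers_test.go:280 are the ingress-nginx admission Jobs' pods, but the follow-up describe is issued without a namespace, so it looks in default; by this point the pods may also have been garbage-collected after their Jobs completed. Either way the describe can legitimately fail. A hedged sketch of a namespace-aware version of the same two-step query, tolerant of pods vanishing between the calls (context name taken from the log; the helper itself is name-only):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List non-running pods together with their namespaces, unlike the
		// helper's name-only jsonpath, so describe can target the right one.
		out, err := exec.Command("kubectl", "--context", "addons-567517",
			"get", "po", "-A", "--field-selector=status.phase!=Running",
			"-o=jsonpath={range .items[*]}{.metadata.namespace}/{.metadata.name}{\"\\n\"}{end}").Output()
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line == "" {
				continue
			}
			parts := strings.SplitN(line, "/", 2)
			desc, err := exec.Command("kubectl", "--context", "addons-567517",
				"-n", parts[0], "describe", "pod", parts[1]).CombinedOutput()
			if err != nil {
				// The pod may have been garbage-collected between the two calls.
				fmt.Printf("%s: %s", line, desc)
				continue
			}
			fmt.Println(string(desc))
		}
	}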
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (252.613474ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:27:00.864174   14645 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:27:00.864403   14645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:27:00.864432   14645 out.go:374] Setting ErrFile to fd 2...
	I1019 16:27:00.864452   14645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:27:00.864729   14645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:27:00.865093   14645 mustload.go:66] Loading cluster: addons-567517
	I1019 16:27:00.865539   14645 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:27:00.865574   14645 addons.go:607] checking whether the cluster is paused
	I1019 16:27:00.865709   14645 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:27:00.865738   14645 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:27:00.866218   14645 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:27:00.884189   14645 ssh_runner.go:195] Run: systemctl --version
	I1019 16:27:00.884242   14645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:27:00.905880   14645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:27:01.009462   14645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:27:01.009580   14645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:27:01.038932   14645 cri.go:89] found id: "3094b4a7913e25867202c5968a2e28189eed3370587c14e6e668d7747454c09a"
	I1019 16:27:01.038954   14645 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:27:01.038963   14645 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:27:01.038970   14645 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:27:01.038974   14645 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:27:01.038978   14645 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:27:01.038981   14645 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:27:01.038984   14645 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:27:01.038987   14645 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:27:01.038994   14645 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:27:01.038998   14645 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:27:01.039001   14645 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:27:01.039004   14645 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:27:01.039007   14645 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:27:01.039011   14645 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:27:01.039015   14645 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:27:01.039025   14645 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:27:01.039029   14645 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:27:01.039032   14645 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:27:01.039035   14645 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:27:01.039040   14645 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:27:01.039043   14645 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:27:01.039046   14645 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:27:01.039049   14645 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:27:01.039052   14645 cri.go:89] found id: ""
	I1019 16:27:01.039104   14645 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:27:01.054295   14645 out.go:203] 
	W1019 16:27:01.057164   14645 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:27:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:27:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:27:01.057189   14645 out.go:285] * 
	* 
	W1019 16:27:01.061033   14645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:27:01.063980   14645 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
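The exit status 11 here (and on every other addon-disable failure in this run) traces to the paused-state probe visible in the stderr above: minikube lists kube-system containers with crictl (cri.go:89) and then runs `sudo runc list -f json`, which fails because /run/runc does not exist on this CRI-O node; CRI-O presumably keeps its low-level runtime state under a different runtime root, so the check aborts with MK_ADDON_DISABLE_PAUSED before the addon is touched. A minimal sketch (not minikube's actual code) of that two-step probe, runnable on the node, using only the commands shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1, mirroring cri.go:54/89: IDs of kube-system containers.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(strings.Fields(string(ids))))

		// Step 2, the failing call: runc reads state from /run/runc by default,
		// which is absent here, so this exits 1 with "open /run/runc: no such
		// file or directory" and minikube surfaces MK_ADDON_DISABLE_PAUSED.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Println(string(out))
	}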
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable ingress --alsologtostderr -v=1: exit status 11 (263.856429ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:27:01.122661   14690 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:27:01.122823   14690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:27:01.122833   14690 out.go:374] Setting ErrFile to fd 2...
	I1019 16:27:01.122837   14690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:27:01.123083   14690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:27:01.123350   14690 mustload.go:66] Loading cluster: addons-567517
	I1019 16:27:01.123706   14690 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:27:01.123721   14690 addons.go:607] checking whether the cluster is paused
	I1019 16:27:01.123824   14690 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:27:01.123838   14690 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:27:01.124250   14690 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:27:01.141551   14690 ssh_runner.go:195] Run: systemctl --version
	I1019 16:27:01.141605   14690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:27:01.161549   14690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:27:01.269329   14690 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:27:01.269570   14690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:27:01.302925   14690 cri.go:89] found id: "3094b4a7913e25867202c5968a2e28189eed3370587c14e6e668d7747454c09a"
	I1019 16:27:01.302948   14690 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:27:01.302962   14690 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:27:01.302966   14690 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:27:01.302969   14690 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:27:01.302973   14690 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:27:01.302976   14690 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:27:01.302979   14690 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:27:01.302982   14690 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:27:01.302991   14690 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:27:01.302996   14690 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:27:01.302999   14690 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:27:01.303002   14690 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:27:01.303005   14690 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:27:01.303013   14690 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:27:01.303018   14690 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:27:01.303022   14690 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:27:01.303026   14690 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:27:01.303029   14690 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:27:01.303032   14690 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:27:01.303036   14690 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:27:01.303039   14690 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:27:01.303043   14690 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:27:01.303046   14690 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:27:01.303049   14690 cri.go:89] found id: ""
	I1019 16:27:01.303100   14690 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:27:01.318362   14690 out.go:203] 
	W1019 16:27:01.321354   14690 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:27:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:27:01.321378   14690 out.go:285] * 
	W1019 16:27:01.325239   14690 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:27:01.328209   14690 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.58s)
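All of the addon enable/disable failures in this run share the signature above: before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then running `sudo runc list -f json` on the node, and the runc call exits 1 with `open /run/runc: no such file or directory`. A minimal reproduction sketch against the profile from this run (illustrative commands, not part of the test suite):

    # The CRI-level listing succeeds, matching the "found id" lines above.
    out/minikube-linux-arm64 -p addons-567517 ssh -- \
      sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # This is the command the paused check runs; it fails because runc's
    # default state directory /run/runc does not exist on this crio node.
    out/minikube-linux-arm64 -p addons-567517 ssh -- sudo runc list -f json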

TestAddons/parallel/InspektorGadget (6.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-b4v28" [f5204326-ac17-49aa-82ad-73eff3fc771d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004069384s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (315.420873ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:35.515516   12530 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:35.516128   12530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:35.516163   12530 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:35.516183   12530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:35.516493   12530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:35.516844   12530 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:35.517347   12530 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:35.517386   12530 addons.go:607] checking whether the cluster is paused
	I1019 16:24:35.517523   12530 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:35.517553   12530 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:35.518058   12530 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:35.540139   12530 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:35.540202   12530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:35.563983   12530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:35.673258   12530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:35.673347   12530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:35.720286   12530 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:35.720312   12530 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:35.720317   12530 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:35.720321   12530 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:35.720324   12530 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:35.720328   12530 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:35.720333   12530 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:35.720336   12530 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:35.720340   12530 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:35.720346   12530 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:35.720350   12530 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:35.720353   12530 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:35.720356   12530 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:35.720360   12530 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:35.720364   12530 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:35.720369   12530 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:35.720376   12530 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:35.720380   12530 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:35.720383   12530 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:35.720386   12530 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:35.720391   12530 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:35.720394   12530 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:35.720397   12530 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:35.720400   12530 cri.go:89] found id: ""
	I1019 16:24:35.720446   12530 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:35.737454   12530 out.go:203] 
	W1019 16:24:35.740958   12530 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:35.740985   12530 out.go:285] * 
	W1019 16:24:35.744746   12530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:35.747861   12530 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.32s)

TestAddons/parallel/MetricsServer (5.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.70643ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004044872s
addons_test.go:463: (dbg) Run:  kubectl --context addons-567517 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (273.064658ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:29.206328   12385 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:29.206580   12385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:29.206594   12385 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:29.206599   12385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:29.206871   12385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:29.207177   12385 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:29.207543   12385 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:29.207561   12385 addons.go:607] checking whether the cluster is paused
	I1019 16:24:29.207664   12385 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:29.207680   12385 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:29.208121   12385 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:29.233500   12385 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:29.233562   12385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:29.253327   12385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:29.365472   12385 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:29.365597   12385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:29.395303   12385 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:29.395326   12385 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:29.395331   12385 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:29.395334   12385 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:29.395338   12385 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:29.395342   12385 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:29.395345   12385 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:29.395348   12385 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:29.395351   12385 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:29.395378   12385 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:29.395387   12385 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:29.395391   12385 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:29.395395   12385 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:29.395398   12385 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:29.395401   12385 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:29.395412   12385 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:29.395419   12385 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:29.395424   12385 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:29.395428   12385 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:29.395431   12385 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:29.395436   12385 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:29.395451   12385 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:29.395456   12385 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:29.395459   12385 cri.go:89] found id: ""
	I1019 16:24:29.395510   12385 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:29.412574   12385 out.go:203] 
	W1019 16:24:29.415872   12385 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:29.415902   12385 out.go:285] * 
	W1019 16:24:29.419729   12385 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:29.422951   12385 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)
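Note that metrics-server itself was healthy: the pod came up and `kubectl top pods -n kube-system` returned data, so only the trailing disable step died on the same runc check. One plausible root cause, an assumption not confirmed by this log, is that this crio build uses crun rather than runc as its default OCI runtime, in which case /run/runc is never created. A hedged probe of the node:

    # Hypothetical diagnosis; the paths and runtime names are assumptions.
    out/minikube-linux-arm64 -p addons-567517 ssh -- 'ls -d /run/runc /run/crun 2>&1'
    out/minikube-linux-arm64 -p addons-567517 ssh -- \
      'sudo crio config 2>/dev/null | grep -A2 "\[crio.runtime.runtimes"'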

TestAddons/parallel/CSI (40.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1019 16:24:23.516287    4111 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1019 16:24:23.521363    4111 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1019 16:24:23.521396    4111 kapi.go:107] duration metric: took 5.124026ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.134077ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-567517 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-567517 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [953d48a0-deff-4efd-8b6e-1321f33249a9] Pending
helpers_test.go:352: "task-pv-pod" [953d48a0-deff-4efd-8b6e-1321f33249a9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [953d48a0-deff-4efd-8b6e-1321f33249a9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004278505s
addons_test.go:572: (dbg) Run:  kubectl --context addons-567517 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-567517 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-567517 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-567517 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-567517 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-567517 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-567517 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [657d077d-5d5c-4b97-9145-17d5da171bc2] Pending
helpers_test.go:352: "task-pv-pod-restore" [657d077d-5d5c-4b97-9145-17d5da171bc2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [657d077d-5d5c-4b97-9145-17d5da171bc2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003237018s
addons_test.go:614: (dbg) Run:  kubectl --context addons-567517 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-567517 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-567517 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (275.551168ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:25:03.601451   13383 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:25:03.601666   13383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:03.601679   13383 out.go:374] Setting ErrFile to fd 2...
	I1019 16:25:03.601685   13383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:03.601939   13383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:25:03.602210   13383 mustload.go:66] Loading cluster: addons-567517
	I1019 16:25:03.602623   13383 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:03.602641   13383 addons.go:607] checking whether the cluster is paused
	I1019 16:25:03.602749   13383 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:03.602764   13383 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:25:03.603299   13383 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:25:03.631898   13383 ssh_runner.go:195] Run: systemctl --version
	I1019 16:25:03.631948   13383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:25:03.655279   13383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:25:03.761301   13383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:25:03.761388   13383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:25:03.792561   13383 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:25:03.792592   13383 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:25:03.792598   13383 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:25:03.792601   13383 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:25:03.792605   13383 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:25:03.792609   13383 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:25:03.792612   13383 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:25:03.792615   13383 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:25:03.792619   13383 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:25:03.792629   13383 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:25:03.792635   13383 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:25:03.792638   13383 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:25:03.792646   13383 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:25:03.792649   13383 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:25:03.792652   13383 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:25:03.792659   13383 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:25:03.792662   13383 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:25:03.792667   13383 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:25:03.792670   13383 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:25:03.792673   13383 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:25:03.792678   13383 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:25:03.792683   13383 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:25:03.792687   13383 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:25:03.792690   13383 cri.go:89] found id: ""
	I1019 16:25:03.792749   13383 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:25:03.808087   13383 out.go:203] 
	W1019 16:25:03.811636   13383 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:25:03.811660   13383 out.go:285] * 
	W1019 16:25:03.815501   13383 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:25:03.818350   13383 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (255.123954ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:25:03.873632   13425 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:25:03.873902   13425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:03.873916   13425 out.go:374] Setting ErrFile to fd 2...
	I1019 16:25:03.873922   13425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:25:03.874196   13425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:25:03.874601   13425 mustload.go:66] Loading cluster: addons-567517
	I1019 16:25:03.874974   13425 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:03.874991   13425 addons.go:607] checking whether the cluster is paused
	I1019 16:25:03.875095   13425 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:25:03.875111   13425 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:25:03.875542   13425 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:25:03.892518   13425 ssh_runner.go:195] Run: systemctl --version
	I1019 16:25:03.892567   13425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:25:03.910351   13425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:25:04.017649   13425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:25:04.017739   13425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:25:04.047869   13425 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:25:04.047891   13425 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:25:04.047897   13425 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:25:04.047900   13425 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:25:04.047904   13425 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:25:04.047908   13425 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:25:04.047911   13425 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:25:04.047915   13425 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:25:04.047918   13425 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:25:04.047925   13425 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:25:04.047934   13425 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:25:04.047937   13425 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:25:04.047940   13425 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:25:04.047943   13425 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:25:04.047946   13425 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:25:04.047953   13425 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:25:04.047960   13425 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:25:04.047965   13425 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:25:04.047969   13425 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:25:04.047971   13425 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:25:04.047976   13425 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:25:04.047982   13425 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:25:04.047985   13425 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:25:04.047988   13425 cri.go:89] found id: ""
	I1019 16:25:04.048037   13425 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:25:04.063543   13425 out.go:203] 
	W1019 16:25:04.066514   13425 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:25:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:25:04.066634   13425 out.go:285] * 
	W1019 16:25:04.070515   13425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:25:04.073423   13425 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.56s)
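The CSI steps themselves all passed; only the two trailing addons disable calls failed, again on the paused check. The sequence exercises the standard snapshot/restore flow: snapshot the bound PVC, delete the original pod and claim, then create a new claim whose dataSource points back at the VolumeSnapshot. A minimal sketch of such a restore claim, using the object names from this test (the storageClassName and size are assumptions, not the contents of the actual testdata/csi-hostpath-driver/pvc-restore.yaml):

    # Hedged sketch; mirrors what the test's pvc-restore.yaml step does.
    kubectl --context addons-567517 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc   # assumed; must match the snapshotted PVC's class
      dataSource:
        name: new-snapshot-demo           # the VolumeSnapshot created above
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                    # assumed size
    EOF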

TestAddons/parallel/Headlamp (3.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-567517 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-567517 --alsologtostderr -v=1: exit status 11 (306.120359ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:20.583879   11723 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:20.584156   11723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:20.584188   11723 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:20.584209   11723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:20.584501   11723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:20.584847   11723 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:20.585370   11723 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:20.585412   11723 addons.go:607] checking whether the cluster is paused
	I1019 16:24:20.585568   11723 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:20.585600   11723 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:20.586089   11723 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:20.603849   11723 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:20.603900   11723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:20.626274   11723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:20.728997   11723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:20.729096   11723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:20.758661   11723 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:20.758680   11723 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:20.758685   11723 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:20.758688   11723 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:20.758692   11723 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:20.758696   11723 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:20.758698   11723 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:20.758702   11723 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:20.758705   11723 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:20.758716   11723 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:20.758720   11723 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:20.758723   11723 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:20.758726   11723 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:20.758729   11723 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:20.758732   11723 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:20.758741   11723 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:20.758744   11723 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:20.758749   11723 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:20.758752   11723 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:20.758755   11723 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:20.758760   11723 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:20.758763   11723 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:20.758766   11723 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:20.758769   11723 cri.go:89] found id: ""
	I1019 16:24:20.758819   11723 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:20.775250   11723 out.go:203] 
	W1019 16:24:20.778159   11723 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:20.778181   11723 out.go:285] * 
	W1019 16:24:20.782049   11723 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:20.784894   11723 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-567517 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-567517
helpers_test.go:243: (dbg) docker inspect addons-567517:

-- stdout --
	[
	    {
	        "Id": "30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2",
	        "Created": "2025-10-19T16:21:18.715230834Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5275,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:21:18.779663674Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/hosts",
	        "LogPath": "/var/lib/docker/containers/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2-json.log",
	        "Name": "/addons-567517",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-567517:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-567517",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2",
	                "LowerDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efc6e84c52ed978a519dfd7caa0acba5c4de27e3fd76a98d185a407121365c11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-567517",
	                "Source": "/var/lib/docker/volumes/addons-567517/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-567517",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-567517",
	                "name.minikube.sigs.k8s.io": "addons-567517",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f66e53da837f7ec52a165f3cbc8b47b69a445c1cb1b94ab15cd491c6b2c2d1",
	            "SandboxKey": "/var/run/docker/netns/29f66e53da83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-567517": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:ac:b6:e1:36:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bb00710760e8f44157b720b54e4e9184ba695ef1c209c7eddbcabbeafc2696cc",
	                    "EndpointID": "da1b2e3b8fc8a3076cb97b91e16a3d43c2c9fff01f3db7053a7df18716c62147",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-567517",
	                        "30d4c94890b4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
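For triage it is usually enough to pull individual fields out of the dump above with docker inspect --format Go templates instead of reading the whole JSON; a minimal sketch against the same profile name:

	# Hypothetical one-liners over the inspect output above:
	docker inspect -f '{{.State.Status}}' addons-567517
	# -> running
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-567517").IPAddress}}' addons-567517
	# -> 192.168.49.2
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-567517
	# -> 32768

The last template is the same one the harness runs later in the start log to discover the node's SSH port.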
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-567517 -n addons-567517
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-567517 logs -n 25: (1.664234714s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-860860 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-860860   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-860860                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-860860   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ -o=json --download-only -p download-only-436192 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-436192   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-436192                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-436192   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-860860                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-860860   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-436192                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-436192   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ --download-only -p download-docker-893374 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-893374 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ -p download-docker-893374                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-893374 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ --download-only -p binary-mirror-533416 --alsologtostderr --binary-mirror http://127.0.0.1:41649 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-533416   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ -p binary-mirror-533416                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-533416   │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ addons  │ enable dashboard -p addons-567517                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-567517                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ start   │ -p addons-567517 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:23 UTC │
	│ addons  │ addons-567517 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:23 UTC │                     │
	│ addons  │ addons-567517 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ addons-567517 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ ip      │ addons-567517 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │ 19 Oct 25 16:24 UTC │
	│ addons  │ addons-567517 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ ssh     │ addons-567517 ssh cat /opt/local-path-provisioner/pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │ 19 Oct 25 16:24 UTC │
	│ addons  │ addons-567517 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	│ addons  │ enable headlamp -p addons-567517 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-567517          │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
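	Read against that format line, the first entry below decodes as: I = Info severity, 1019 = October 19, 16:20:52.711728 = wall-clock time with microseconds, 4866 = the emitting thread id, and out.go:360 = the source file and line of the write.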
	I1019 16:20:52.711728    4866 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:52.711924    4866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:52.711951    4866 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:52.711968    4866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:52.712356    4866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:20:52.713414    4866 out.go:368] Setting JSON to false
	I1019 16:20:52.714171    4866 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":201,"bootTime":1760890652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:20:52.714270    4866 start.go:143] virtualization:  
	I1019 16:20:52.717648    4866 out.go:179] * [addons-567517] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 16:20:52.721598    4866 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:20:52.721672    4866 notify.go:221] Checking for updates...
	I1019 16:20:52.727553    4866 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:52.730706    4866 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:20:52.733716    4866 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:20:52.736565    4866 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 16:20:52.739411    4866 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:20:52.742739    4866 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:52.774804    4866 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:20:52.774925    4866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:52.829739    4866 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-19 16:20:52.820362093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:52.829847    4866 docker.go:319] overlay module found
	I1019 16:20:52.832915    4866 out.go:179] * Using the docker driver based on user configuration
	I1019 16:20:52.835882    4866 start.go:309] selected driver: docker
	I1019 16:20:52.835902    4866 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:52.835915    4866 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:20:52.836629    4866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:52.890858    4866 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-19 16:20:52.88168984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:52.891026    4866 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:52.891253    4866 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:20:52.894144    4866 out.go:179] * Using Docker driver with root privileges
	I1019 16:20:52.897018    4866 cni.go:84] Creating CNI manager for ""
	I1019 16:20:52.897082    4866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:20:52.897094    4866 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:52.897165    4866 start.go:353] cluster config:
	{Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:52.902042    4866 out.go:179] * Starting "addons-567517" primary control-plane node in "addons-567517" cluster
	I1019 16:20:52.904809    4866 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:20:52.907672    4866 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:52.910493    4866 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:52.910567    4866 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:52.910608    4866 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 16:20:52.910617    4866 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:52.910724    4866 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 16:20:52.910732    4866 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 16:20:52.911064    4866 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/config.json ...
	I1019 16:20:52.911082    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/config.json: {Name:mk491f7cd4580b695ff73a32359e8a6b5d14b00d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:20:52.925505    4866 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:52.925629    4866 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:52.925654    4866 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 16:20:52.925659    4866 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 16:20:52.925667    4866 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 16:20:52.925676    4866 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 16:21:11.337906    4866 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 16:21:11.337960    4866 cache.go:233] Successfully downloaded all kic artifacts
	I1019 16:21:11.337987    4866 start.go:360] acquireMachinesLock for addons-567517: {Name:mk619b65a6c60e99d51761523a9021973b2a13ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:11.338098    4866 start.go:364] duration metric: took 86.826µs to acquireMachinesLock for "addons-567517"
	I1019 16:21:11.338128    4866 start.go:93] Provisioning new machine with config: &{Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:21:11.338209    4866 start.go:125] createHost starting for "" (driver="docker")
	I1019 16:21:11.341563    4866 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 16:21:11.341827    4866 start.go:159] libmachine.API.Create for "addons-567517" (driver="docker")
	I1019 16:21:11.341871    4866 client.go:171] LocalClient.Create starting
	I1019 16:21:11.341991    4866 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 16:21:12.354371    4866 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 16:21:12.987114    4866 cli_runner.go:164] Run: docker network inspect addons-567517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 16:21:13.007585    4866 cli_runner.go:211] docker network inspect addons-567517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 16:21:13.007673    4866 network_create.go:284] running [docker network inspect addons-567517] to gather additional debugging logs...
	I1019 16:21:13.007710    4866 cli_runner.go:164] Run: docker network inspect addons-567517
	W1019 16:21:13.023437    4866 cli_runner.go:211] docker network inspect addons-567517 returned with exit code 1
	I1019 16:21:13.023483    4866 network_create.go:287] error running [docker network inspect addons-567517]: docker network inspect addons-567517: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-567517 not found
	I1019 16:21:13.023498    4866 network_create.go:289] output of [docker network inspect addons-567517]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-567517 not found
	
	** /stderr **
	I1019 16:21:13.023612    4866 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:21:13.040111    4866 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d8110}
	I1019 16:21:13.040155    4866 network_create.go:124] attempt to create docker network addons-567517 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 16:21:13.040213    4866 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-567517 addons-567517
	I1019 16:21:13.100038    4866 network_create.go:108] docker network addons-567517 192.168.49.0/24 created
	I1019 16:21:13.100070    4866 kic.go:121] calculated static IP "192.168.49.2" for the "addons-567517" container
	I1019 16:21:13.100158    4866 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 16:21:13.115523    4866 cli_runner.go:164] Run: docker volume create addons-567517 --label name.minikube.sigs.k8s.io=addons-567517 --label created_by.minikube.sigs.k8s.io=true
	I1019 16:21:13.133409    4866 oci.go:103] Successfully created a docker volume addons-567517
	I1019 16:21:13.133498    4866 cli_runner.go:164] Run: docker run --rm --name addons-567517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-567517 --entrypoint /usr/bin/test -v addons-567517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 16:21:14.178695    4866 cli_runner.go:217] Completed: docker run --rm --name addons-567517-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-567517 --entrypoint /usr/bin/test -v addons-567517:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (1.045160364s)
	I1019 16:21:14.178733    4866 oci.go:107] Successfully prepared a docker volume addons-567517
	I1019 16:21:14.178775    4866 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:14.178798    4866 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 16:21:14.178867    4866 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-567517:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 16:21:18.639826    4866 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-567517:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.460916175s)
	I1019 16:21:18.639853    4866 kic.go:203] duration metric: took 4.461053231s to extract preloaded images to volume ...
	W1019 16:21:18.640002    4866 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 16:21:18.640124    4866 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 16:21:18.700135    4866 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-567517 --name addons-567517 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-567517 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-567517 --network addons-567517 --ip 192.168.49.2 --volume addons-567517:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 16:21:19.049460    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Running}}
	I1019 16:21:19.074109    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:19.102480    4866 cli_runner.go:164] Run: docker exec addons-567517 stat /var/lib/dpkg/alternatives/iptables
	I1019 16:21:19.153630    4866 oci.go:144] the created container "addons-567517" has a running status.
	I1019 16:21:19.153659    4866 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa...
	I1019 16:21:19.641201    4866 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 16:21:19.670728    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:19.702784    4866 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 16:21:19.702804    4866 kic_runner.go:114] Args: [docker exec --privileged addons-567517 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 16:21:19.758209    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:19.780038    4866 machine.go:94] provisionDockerMachine start ...
	I1019 16:21:19.780147    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:19.803281    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:19.803635    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:19.803650    4866 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 16:21:19.986799    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-567517
	
	I1019 16:21:19.986870    4866 ubuntu.go:182] provisioning hostname "addons-567517"
	I1019 16:21:19.986964    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.023499    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:20.023814    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:20.023827    4866 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-567517 && echo "addons-567517" | sudo tee /etc/hostname
	I1019 16:21:20.195445    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-567517
	
	I1019 16:21:20.195526    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.214456    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:20.214846    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:20.214870    4866 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-567517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-567517/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-567517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 16:21:20.374557    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 16:21:20.374587    4866 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 16:21:20.374619    4866 ubuntu.go:190] setting up certificates
	I1019 16:21:20.374636    4866 provision.go:84] configureAuth start
	I1019 16:21:20.374703    4866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-567517
	I1019 16:21:20.391399    4866 provision.go:143] copyHostCerts
	I1019 16:21:20.391509    4866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 16:21:20.391648    4866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 16:21:20.391717    4866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 16:21:20.391776    4866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.addons-567517 san=[127.0.0.1 192.168.49.2 addons-567517 localhost minikube]
	I1019 16:21:20.606253    4866 provision.go:177] copyRemoteCerts
	I1019 16:21:20.606317    4866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 16:21:20.606356    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.623756    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:20.726312    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 16:21:20.743663    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 16:21:20.763117    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 16:21:20.780382    4866 provision.go:87] duration metric: took 405.732372ms to configureAuth
	I1019 16:21:20.780407    4866 ubuntu.go:206] setting minikube options for container-runtime
	I1019 16:21:20.780587    4866 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:20.780696    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:20.798834    4866 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:20.799137    4866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1019 16:21:20.799159    4866 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 16:21:21.049679    4866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 16:21:21.049704    4866 machine.go:97] duration metric: took 1.269643093s to provisionDockerMachine
	I1019 16:21:21.049713    4866 client.go:174] duration metric: took 9.707833688s to LocalClient.Create
	I1019 16:21:21.049726    4866 start.go:167] duration metric: took 9.7079012s to libmachine.API.Create "addons-567517"
	I1019 16:21:21.049733    4866 start.go:293] postStartSetup for "addons-567517" (driver="docker")
	I1019 16:21:21.049743    4866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 16:21:21.049811    4866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 16:21:21.049870    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.067757    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.170483    4866 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 16:21:21.173710    4866 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 16:21:21.173775    4866 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 16:21:21.173793    4866 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 16:21:21.173875    4866 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 16:21:21.173908    4866 start.go:296] duration metric: took 124.169347ms for postStartSetup
	I1019 16:21:21.174258    4866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-567517
	I1019 16:21:21.191175    4866 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/config.json ...
	I1019 16:21:21.191470    4866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:21:21.191519    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.209043    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.311591    4866 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 16:21:21.316115    4866 start.go:128] duration metric: took 9.977890882s to createHost
	I1019 16:21:21.316141    4866 start.go:83] releasing machines lock for "addons-567517", held for 9.978030089s
	I1019 16:21:21.316208    4866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-567517
	I1019 16:21:21.333298    4866 ssh_runner.go:195] Run: cat /version.json
	I1019 16:21:21.333352    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.333596    4866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 16:21:21.333660    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:21.360666    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.361439    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:21.548638    4866 ssh_runner.go:195] Run: systemctl --version
	I1019 16:21:21.554778    4866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 16:21:21.589430    4866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 16:21:21.594291    4866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 16:21:21.594357    4866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 16:21:21.621793    4866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 16:21:21.621866    4866 start.go:496] detecting cgroup driver to use...
	I1019 16:21:21.621914    4866 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 16:21:21.621996    4866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 16:21:21.638627    4866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 16:21:21.650627    4866 docker.go:218] disabling cri-docker service (if available) ...
	I1019 16:21:21.650687    4866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 16:21:21.668231    4866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 16:21:21.686965    4866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 16:21:21.799409    4866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 16:21:21.923747    4866 docker.go:234] disabling docker service ...
	I1019 16:21:21.923813    4866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 16:21:21.944368    4866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 16:21:21.957725    4866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 16:21:22.071050    4866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 16:21:22.196039    4866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 16:21:22.210107    4866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 16:21:22.225283    4866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 16:21:22.225390    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.235383    4866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 16:21:22.235517    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.245098    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.253852    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.262416    4866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 16:21:22.270372    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.278989    4866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:21:22.291894    4866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
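
The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. Assuming all of the edits applied cleanly, a grep like this sketch should show the resulting keys:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
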
	I1019 16:21:22.300472    4866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 16:21:22.307633    4866 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 16:21:22.307723    4866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 16:21:22.321272    4866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
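
The failed sysctl at 16:21:22.307 just means br_netfilter was not loaded yet; minikube falls back to modprobe and then enables IPv4 forwarding directly via /proc. On a long-lived host these settings would normally also be persisted across reboots, for example (a sketch, not something this run does):

	echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	    | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system
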
	I1019 16:21:22.328669    4866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:22.438967    4866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 16:21:22.563633    4866 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 16:21:22.563723    4866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 16:21:22.567331    4866 start.go:564] Will wait 60s for crictl version
	I1019 16:21:22.567387    4866 ssh_runner.go:195] Run: which crictl
	I1019 16:21:22.570646    4866 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 16:21:22.598732    4866 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 16:21:22.598906    4866 ssh_runner.go:195] Run: crio --version
	I1019 16:21:22.626264    4866 ssh_runner.go:195] Run: crio --version
	I1019 16:21:22.656369    4866 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 16:21:22.659355    4866 cli_runner.go:164] Run: docker network inspect addons-567517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:21:22.675413    4866 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 16:21:22.679130    4866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:22.688908    4866 kubeadm.go:884] updating cluster {Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 16:21:22.689032    4866 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:22.689093    4866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:22.724750    4866 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:22.724773    4866 crio.go:433] Images already preloaded, skipping extraction
	I1019 16:21:22.724826    4866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:21:22.750363    4866 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:21:22.750385    4866 cache_images.go:86] Images are preloaded, skipping loading
	I1019 16:21:22.750393    4866 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 16:21:22.750480    4866 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-567517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 16:21:22.750589    4866 ssh_runner.go:195] Run: crio config
	I1019 16:21:22.812291    4866 cni.go:84] Creating CNI manager for ""
	I1019 16:21:22.812318    4866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:22.812338    4866 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 16:21:22.812360    4866 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-567517 NodeName:addons-567517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 16:21:22.812489    4866 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-567517"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
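
The generated kubeadm configuration above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. Since kubeadm v1.26 such a file can also be sanity-checked offline; with the binaries this run installs, that would look roughly like (a sketch, not part of the run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml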
	
	I1019 16:21:22.812561    4866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 16:21:22.820127    4866 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 16:21:22.820189    4866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 16:21:22.827098    4866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 16:21:22.839347    4866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 16:21:22.851098    4866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1019 16:21:22.862912    4866 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 16:21:22.866654    4866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:21:22.875880    4866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:22.980104    4866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:22.994860    4866 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517 for IP: 192.168.49.2
	I1019 16:21:22.994884    4866 certs.go:195] generating shared ca certs ...
	I1019 16:21:22.994900    4866 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:22.995016    4866 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 16:21:23.865953    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt ...
	I1019 16:21:23.865982    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt: {Name:mkf27cf70815f99453893555ee6791fe81ad17cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:23.866162    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key ...
	I1019 16:21:23.866175    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key: {Name:mk664244a6bffdbc499971b768334808c7f88ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:23.866249    4866 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 16:21:24.754684    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt ...
	I1019 16:21:24.754715    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt: {Name:mke2b0b8c1c015a719d5f79ce7a9bd1893fcb19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:24.754893    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key ...
	I1019 16:21:24.754908    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key: {Name:mk93e4874429d278bc7d76ec409b752a3dd045e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:24.754982    4866 certs.go:257] generating profile certs ...
	I1019 16:21:24.755059    4866 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.key
	I1019 16:21:24.755077    4866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt with IP's: []
	I1019 16:21:25.391736    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt ...
	I1019 16:21:25.391766    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: {Name:mkbe082a86ad49bca82b3c1e87468b596f96c8d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.391943    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.key ...
	I1019 16:21:25.391954    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.key: {Name:mkea240b97b8e09867828145871510e812e090d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.392035    4866 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e
	I1019 16:21:25.392055    4866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 16:21:25.611487    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e ...
	I1019 16:21:25.611516    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e: {Name:mke2387f21657fa72494aa52dfd2d980b8c2b71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.611683    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e ...
	I1019 16:21:25.611696    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e: {Name:mk04c718a5bc5921681f73c6a363ba3dcda70529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.611777    4866 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt.813a163e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt
	I1019 16:21:25.611858    4866 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key.813a163e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key
	I1019 16:21:25.611912    4866 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key
	I1019 16:21:25.611931    4866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt with IP's: []
	I1019 16:21:25.776507    4866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt ...
	I1019 16:21:25.776534    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt: {Name:mke38ad0e401d7c6e6c8dbba919f6b59c860a004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.776695    4866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key ...
	I1019 16:21:25.776707    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key: {Name:mkfa49a36b011783748654ec04e4f45b988d49fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:25.776893    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 16:21:25.776936    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 16:21:25.776963    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 16:21:25.776989    4866 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 16:21:25.777549    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 16:21:25.795473    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 16:21:25.812813    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 16:21:25.830128    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 16:21:25.848798    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 16:21:25.866369    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 16:21:25.884502    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 16:21:25.901568    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 16:21:25.918882    4866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 16:21:25.936138    4866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
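
The scp calls above place the CA, apiserver, and proxy-client key pairs under /var/lib/minikube/certs. The apiserver cert was generated at 16:21:25.392 with four IP SANs; a hedged way to confirm they made it into the copied cert:

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	# expect 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2 among the SANs
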
	I1019 16:21:25.948850    4866 ssh_runner.go:195] Run: openssl version
	I1019 16:21:25.955089    4866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 16:21:25.963437    4866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:25.967028    4866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:25.967132    4866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:21:26.008214    4866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
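
The b5213941.0 symlink above follows OpenSSL's subject-hash naming convention for trust directories; the hash value comes from the "openssl x509 -hash" call two steps earlier. Recreating the link by hand would look roughly like:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
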
	I1019 16:21:26.016690    4866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 16:21:26.020539    4866 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 16:21:26.020588    4866 kubeadm.go:401] StartCluster: {Name:addons-567517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-567517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:21:26.020684    4866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:21:26.020746    4866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:21:26.052367    4866 cri.go:89] found id: ""
	I1019 16:21:26.052515    4866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 16:21:26.061344    4866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 16:21:26.069748    4866 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 16:21:26.069866    4866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 16:21:26.079334    4866 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 16:21:26.079389    4866 kubeadm.go:158] found existing configuration files:
	
	I1019 16:21:26.079481    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 16:21:26.090069    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 16:21:26.090212    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 16:21:26.101051    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 16:21:26.108714    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 16:21:26.108774    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 16:21:26.115837    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 16:21:26.123431    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 16:21:26.123502    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 16:21:26.130768    4866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 16:21:26.137993    4866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 16:21:26.138053    4866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 16:21:26.145003    4866 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 16:21:26.181734    4866 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 16:21:26.182028    4866 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 16:21:26.209040    4866 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 16:21:26.209119    4866 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 16:21:26.209161    4866 kubeadm.go:319] OS: Linux
	I1019 16:21:26.209213    4866 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 16:21:26.209266    4866 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 16:21:26.209319    4866 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 16:21:26.209373    4866 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 16:21:26.209428    4866 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 16:21:26.209491    4866 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 16:21:26.209542    4866 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 16:21:26.209596    4866 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 16:21:26.209649    4866 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 16:21:26.274752    4866 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 16:21:26.274871    4866 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 16:21:26.274989    4866 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 16:21:26.287534    4866 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 16:21:26.294374    4866 out.go:252]   - Generating certificates and keys ...
	I1019 16:21:26.294486    4866 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 16:21:26.294594    4866 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 16:21:27.377739    4866 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 16:21:27.968860    4866 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 16:21:28.075981    4866 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 16:21:29.363354    4866 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 16:21:29.803554    4866 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 16:21:29.803847    4866 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-567517 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:30.125705    4866 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 16:21:30.126093    4866 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-567517 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 16:21:30.501437    4866 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 16:21:31.057534    4866 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 16:21:31.391564    4866 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 16:21:31.391870    4866 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 16:21:31.853073    4866 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 16:21:32.261384    4866 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 16:21:32.475631    4866 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 16:21:33.059789    4866 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 16:21:34.105422    4866 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 16:21:34.105967    4866 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 16:21:34.110555    4866 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 16:21:34.113953    4866 out.go:252]   - Booting up control plane ...
	I1019 16:21:34.114087    4866 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 16:21:34.114181    4866 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 16:21:34.114710    4866 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 16:21:34.131037    4866 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 16:21:34.131151    4866 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 16:21:34.138891    4866 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 16:21:34.139225    4866 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 16:21:34.139273    4866 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 16:21:34.268691    4866 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 16:21:34.268815    4866 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 16:21:35.269273    4866 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000860818s
	I1019 16:21:35.272968    4866 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 16:21:35.273080    4866 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 16:21:35.273198    4866 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 16:21:35.273314    4866 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 16:21:39.129235    4866 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.856239003s
	I1019 16:21:39.367458    4866 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.093610571s
	I1019 16:21:40.774353    4866 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501321265s
	I1019 16:21:40.797164    4866 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 16:21:40.809056    4866 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 16:21:40.824036    4866 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 16:21:40.824266    4866 kubeadm.go:319] [mark-control-plane] Marking the node addons-567517 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 16:21:40.836358    4866 kubeadm.go:319] [bootstrap-token] Using token: no6kd7.it7lncyyywpjgtmi
	I1019 16:21:40.839580    4866 out.go:252]   - Configuring RBAC rules ...
	I1019 16:21:40.839713    4866 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 16:21:40.845685    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 16:21:40.853824    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 16:21:40.857559    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 16:21:40.861614    4866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 16:21:40.865727    4866 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 16:21:41.182992    4866 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 16:21:41.623722    4866 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 16:21:42.182797    4866 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 16:21:42.184132    4866 kubeadm.go:319] 
	I1019 16:21:42.184231    4866 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 16:21:42.184239    4866 kubeadm.go:319] 
	I1019 16:21:42.184320    4866 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 16:21:42.184352    4866 kubeadm.go:319] 
	I1019 16:21:42.184384    4866 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 16:21:42.184449    4866 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 16:21:42.184512    4866 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 16:21:42.184523    4866 kubeadm.go:319] 
	I1019 16:21:42.184582    4866 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 16:21:42.184591    4866 kubeadm.go:319] 
	I1019 16:21:42.184642    4866 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 16:21:42.184651    4866 kubeadm.go:319] 
	I1019 16:21:42.184706    4866 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 16:21:42.184789    4866 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 16:21:42.184865    4866 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 16:21:42.184874    4866 kubeadm.go:319] 
	I1019 16:21:42.184963    4866 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 16:21:42.185048    4866 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 16:21:42.185057    4866 kubeadm.go:319] 
	I1019 16:21:42.185180    4866 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token no6kd7.it7lncyyywpjgtmi \
	I1019 16:21:42.185294    4866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 16:21:42.185322    4866 kubeadm.go:319] 	--control-plane 
	I1019 16:21:42.185331    4866 kubeadm.go:319] 
	I1019 16:21:42.185479    4866 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 16:21:42.185490    4866 kubeadm.go:319] 
	I1019 16:21:42.185577    4866 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token no6kd7.it7lncyyywpjgtmi \
	I1019 16:21:42.185756    4866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
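
The join commands printed above embed the bootstrap token no6kd7.it7lncyyywpjgtmi, which the config above limits to a 24h TTL. If a node needed to join after that, a fresh command could be regenerated on the control plane (standard kubeadm usage, not part of this run):

	kubeadm token create --print-join-command
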
	I1019 16:21:42.189426    4866 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 16:21:42.189681    4866 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 16:21:42.189799    4866 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 16:21:42.189821    4866 cni.go:84] Creating CNI manager for ""
	I1019 16:21:42.189831    4866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:21:42.193091    4866 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 16:21:42.196438    4866 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 16:21:42.201519    4866 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 16:21:42.201539    4866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 16:21:42.222936    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 16:21:42.517168    4866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 16:21:42.517261    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:42.517332    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-567517 minikube.k8s.io/updated_at=2025_10_19T16_21_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=addons-567517 minikube.k8s.io/primary=true
	I1019 16:21:42.684093    4866 ops.go:34] apiserver oom_adj: -16
	I1019 16:21:42.684224    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:43.184260    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:43.684906    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:44.184764    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:44.684274    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:45.185005    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:45.684200    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:46.184434    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:46.685120    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:47.184567    4866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:21:47.315880    4866 kubeadm.go:1114] duration metric: took 4.798678793s to wait for elevateKubeSystemPrivileges
	I1019 16:21:47.315905    4866 kubeadm.go:403] duration metric: took 21.295318862s to StartCluster
	I1019 16:21:47.315921    4866 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:47.316028    4866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:21:47.316403    4866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:47.316578    4866 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:21:47.316758    4866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 16:21:47.316924    4866 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
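
The toEnable map above is the full addon matrix minikube resolved for this profile; every "Setting addon ...=true" line that follows corresponds to one key in it. After start, the same set can be inspected or adjusted per profile, for example:

	out/minikube-linux-arm64 -p addons-567517 addons list
	out/minikube-linux-arm64 -p addons-567517 addons enable metrics-server
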
	I1019 16:21:47.317003    4866 addons.go:70] Setting yakd=true in profile "addons-567517"
	I1019 16:21:47.317017    4866 addons.go:239] Setting addon yakd=true in "addons-567517"
	I1019 16:21:47.317038    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.317551    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.317926    4866 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:47.318144    4866 addons.go:70] Setting inspektor-gadget=true in profile "addons-567517"
	I1019 16:21:47.318177    4866 addons.go:239] Setting addon inspektor-gadget=true in "addons-567517"
	I1019 16:21:47.318249    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.318353    4866 addons.go:70] Setting metrics-server=true in profile "addons-567517"
	I1019 16:21:47.318376    4866 addons.go:239] Setting addon metrics-server=true in "addons-567517"
	I1019 16:21:47.318414    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.318820    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.318852    4866 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-567517"
	I1019 16:21:47.318874    4866 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-567517"
	I1019 16:21:47.318891    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.319267    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.325242    4866 addons.go:70] Setting registry=true in profile "addons-567517"
	I1019 16:21:47.325282    4866 addons.go:239] Setting addon registry=true in "addons-567517"
	I1019 16:21:47.325313    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.325336    4866 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-567517"
	I1019 16:21:47.325356    4866 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-567517"
	I1019 16:21:47.325380    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.325764    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.325799    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.337136    4866 addons.go:70] Setting registry-creds=true in profile "addons-567517"
	I1019 16:21:47.337224    4866 addons.go:239] Setting addon registry-creds=true in "addons-567517"
	I1019 16:21:47.337273    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.337835    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.338665    4866 addons.go:70] Setting cloud-spanner=true in profile "addons-567517"
	I1019 16:21:47.338729    4866 addons.go:239] Setting addon cloud-spanner=true in "addons-567517"
	I1019 16:21:47.338957    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.339449    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.355089    4866 addons.go:70] Setting storage-provisioner=true in profile "addons-567517"
	I1019 16:21:47.355126    4866 addons.go:239] Setting addon storage-provisioner=true in "addons-567517"
	I1019 16:21:47.355167    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.355743    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.357290    4866 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-567517"
	I1019 16:21:47.357394    4866 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-567517"
	I1019 16:21:47.357447    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.357924    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.368140    4866 addons.go:70] Setting default-storageclass=true in profile "addons-567517"
	I1019 16:21:47.368219    4866 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-567517"
	I1019 16:21:47.368313    4866 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-567517"
	I1019 16:21:47.368375    4866 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-567517"
	I1019 16:21:47.368611    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.369823    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.382640    4866 addons.go:70] Setting volcano=true in profile "addons-567517"
	I1019 16:21:47.382729    4866 addons.go:239] Setting addon volcano=true in "addons-567517"
	I1019 16:21:47.382777    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.383864    4866 addons.go:70] Setting gcp-auth=true in profile "addons-567517"
	I1019 16:21:47.383931    4866 mustload.go:66] Loading cluster: addons-567517
	I1019 16:21:47.384158    4866 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:21:47.384438    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.390135    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.397840    4866 addons.go:70] Setting volumesnapshots=true in profile "addons-567517"
	I1019 16:21:47.397964    4866 addons.go:239] Setting addon volumesnapshots=true in "addons-567517"
	I1019 16:21:47.398082    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.402107    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.403046    4866 addons.go:70] Setting ingress=true in profile "addons-567517"
	I1019 16:21:47.403098    4866 addons.go:239] Setting addon ingress=true in "addons-567517"
	I1019 16:21:47.403136    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.403685    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.423271    4866 out.go:179] * Verifying Kubernetes components...
	I1019 16:21:47.427257    4866 addons.go:70] Setting ingress-dns=true in profile "addons-567517"
	I1019 16:21:47.427313    4866 addons.go:239] Setting addon ingress-dns=true in "addons-567517"
	I1019 16:21:47.427359    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.427827    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.428301    4866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:21:47.318832    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
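Each cli_runner line above is the same probe: docker container inspect addons-567517 --format={{.State.Status}}, run once per addon path to confirm the machine container is still up before any files are pushed to it. A sketch of that probe via os/exec (assumes a local docker CLI on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState runs the same inspect invocation as the cli_runner lines above.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil // e.g. "running"
    }

    func main() {
        state, err := containerState("addons-567517")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("state:", state)
    }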
	I1019 16:21:47.551599    4866 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 16:21:47.551748    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 16:21:47.556566    4866 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:47.556587    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 16:21:47.556653    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.560522    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 16:21:47.560555    4866 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 16:21:47.560629    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.573880    4866 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 16:21:47.575142    4866 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 16:21:47.579202    4866 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 16:21:47.579232    4866 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 16:21:47.579302    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.583584    4866 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:47.583616    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 16:21:47.583683    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.607328    4866 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:21:47.611008    4866 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:47.611040    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 16:21:47.611104    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
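The "scp memory --> /etc/kubernetes/addons/... (N bytes)" lines are not file-to-file copies: the manifest bytes are rendered in memory and streamed straight to the remote path over the SSH connection, which is why the log can print an exact byte count. A minimal sketch of that pattern with golang.org/x/crypto/ssh, assuming key-based auth like the id_rsa path logged further down (address, user, and paths here are illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushMemory copies an in-memory manifest to a remote path, which is
    // roughly what the "scp memory --> ..." lines amount to.
    func pushMemory(addr, user, keyPath, remotePath string, data []byte) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
        })
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee under sudo, mirroring the root-owned /etc/kubernetes/addons target
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
        err := pushMemory("127.0.0.1:32768", "docker",
            os.ExpandEnv("$HOME/.ssh/id_rsa"), "/tmp/demo.yaml", manifest)
        fmt.Println("err:", err)
    }

Streaming from memory avoids staging a temp file on the host and keeps each addon's manifest push a single SSH round trip.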
	I1019 16:21:47.625819    4866 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 16:21:47.629211    4866 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 16:21:47.652125    4866 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-567517"
	I1019 16:21:47.652165    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.652573    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.661537    4866 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 16:21:47.661731    4866 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 16:21:47.681543    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 16:21:47.681567    4866 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 16:21:47.681627    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.684176    4866 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:47.684200    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 16:21:47.684265    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.704304    4866 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 16:21:47.711018    4866 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 16:21:47.711046    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 16:21:47.711124    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.712587    4866 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:47.712649    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 16:21:47.712729    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.737453    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 16:21:47.737829    4866 addons.go:239] Setting addon default-storageclass=true in "addons-567517"
	I1019 16:21:47.737892    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.738323    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:47.753306    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	W1019 16:21:47.754963    4866 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 16:21:47.757863    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:47.766793    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 16:21:47.766979    4866 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 16:21:47.784782    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 16:21:47.787857    4866 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 16:21:47.788805    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 16:21:47.788821    4866 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 16:21:47.788907    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.794477    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 16:21:47.805592    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:47.809171    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:47.813775    4866 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:47.813842    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 16:21:47.813934    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.839226    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 16:21:47.842355    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 16:21:47.845651    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 16:21:47.848959    4866 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:47.848982    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 16:21:47.849054    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
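Before each of those pushes, the runner resolves which host port Docker mapped to the container's port 22, using the exact Go template quoted in the log; the sshutil lines that follow then dial 127.0.0.1 on the resolved port (32768 here). The same lookup as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort asks Docker which host port is mapped to 22/tcp,
    // using the same Go template as the cli_runner lines above.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // e.g. "32768"
    }

    func main() {
        port, err := hostSSHPort("addons-567517")
        fmt.Println(port, err)
    }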
	I1019 16:21:47.849283    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.851659    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.852851    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.853270    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.856736    4866 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 16:21:47.862624    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 16:21:47.862653    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 16:21:47.862722    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.872413    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.877828    4866 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 16:21:47.883154    4866 out.go:179]   - Using image docker.io/busybox:stable
	I1019 16:21:47.890839    4866 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:47.890863    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 16:21:47.890936    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:47.948493    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.963899    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.970924    4866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 16:21:47.975972    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.984458    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:47.995171    4866 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:47.995194    4866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 16:21:47.995256    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:48.012684    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.037059    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.050147    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.051811    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	W1019 16:21:48.057616    4866 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:48.057655    4866 retry.go:31] will retry after 251.674837ms: ssh: handshake failed: EOF
	I1019 16:21:48.065985    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:48.074869    4866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:21:48.075061    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	W1019 16:21:48.078870    4866 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:48.078900    4866 retry.go:31] will retry after 220.466218ms: ssh: handshake failed: EOF
	W1019 16:21:48.304519    4866 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 16:21:48.304584    4866 retry.go:31] will retry after 465.685346ms: ssh: handshake failed: EOF
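The handshake EOFs above are treated as transient: each failure is retried after a short, jittered delay (251ms, 220ms, 465ms here), and the retries succeed once sshd in the container settles. A sketch of that retry shape, with the exact backoff policy assumed rather than taken from minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // withRetry retries fn with jittered, roughly doubling delays, similar in
    // spirit to the "will retry after ..." lines above.
    func withRetry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base<<i + time.Duration(rand.Int63n(int64(base))) // backoff plus jitter
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := withRetry(4, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("final:", err)
    }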
	I1019 16:21:48.477083    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 16:21:48.477140    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 16:21:48.607319    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 16:21:48.607392    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 16:21:48.631410    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:21:48.662009    4866 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 16:21:48.662031    4866 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 16:21:48.740843    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 16:21:48.740912    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 16:21:48.780480    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:21:48.791499    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:21:48.812269    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:21:48.813133    4866 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:48.813152    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 16:21:48.816676    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:21:48.820018    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 16:21:48.823711    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 16:21:48.823778    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 16:21:48.874514    4866 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 16:21:48.874655    4866 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 16:21:48.877569    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:21:48.884886    4866 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 16:21:48.884955    4866 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 16:21:48.930284    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:21:48.944738    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:48.983972    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 16:21:48.984042    4866 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 16:21:49.011075    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 16:21:49.011144    4866 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 16:21:49.065310    4866 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 16:21:49.065384    4866 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 16:21:49.099948    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 16:21:49.099973    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 16:21:49.127956    4866 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:49.127974    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 16:21:49.133189    4866 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:49.133215    4866 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 16:21:49.200888    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 16:21:49.200910    4866 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 16:21:49.201877    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 16:21:49.201896    4866 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 16:21:49.260343    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:21:49.269803    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 16:21:49.299328    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:21:49.347431    4866 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 16:21:49.347457    4866 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 16:21:49.349041    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 16:21:49.349062    4866 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 16:21:49.356416    4866 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:49.356442    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 16:21:49.567747    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 16:21:49.567770    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 16:21:49.615093    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:49.632289    4866 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:49.632313    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 16:21:49.690632    4866 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.719676275s)
	I1019 16:21:49.690662    4866 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
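The pipeline that just completed (1.72s) edits the coredns ConfigMap in place: it inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to 192.168.49.1 inside the cluster, then feeds the result back through kubectl replace (the second -e also slips a "log" directive in above "errors"). Reading the stanza straight out of the sed expression, the Corefile gains:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

A string-level Go equivalent of that insertion (a sketch of the transformation, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHosts inserts a hosts stanza ahead of the forward directive,
    // mirroring what the sed pipeline above does to the Corefile.
    func injectHosts(corefile, hostIP string) string {
        stanza := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            hostIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(stanza) // insert before the forward line, like sed's /i
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        fmt.Print(injectHosts(corefile, "192.168.49.1"))
    }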
	I1019 16:21:49.690715    4866 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.615824921s)
	I1019 16:21:49.691445    4866 node_ready.go:35] waiting up to 6m0s for node "addons-567517" to be "Ready" ...
	I1019 16:21:49.882127    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 16:21:49.882201    4866 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 16:21:49.938508    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:21:50.144395    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 16:21:50.144467    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 16:21:50.198104    4866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-567517" context rescaled to 1 replicas
	I1019 16:21:50.364570    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 16:21:50.364590    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 16:21:50.647356    4866 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:21:50.647376    4866 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 16:21:50.952462    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1019 16:21:51.708476    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
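node_ready.go will keep printing this line until the node's Ready condition flips to True, which normally takes a few seconds after "sudo systemctl start kubelet" while the kubelet and CNI come up. The check it keeps retrying amounts to the following client-go call; this is a sketch rather than the test's own code, with the kubeconfig path assumed:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has the Ready condition True,
    // the same check the node_ready.go lines above keep retrying.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ok, err := nodeReady(cs, "addons-567517")
        fmt.Println(ok, err)
    }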
	I1019 16:21:52.182625    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.551183299s)
	I1019 16:21:52.782444    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.001932415s)
	I1019 16:21:52.782677    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.991147735s)
	I1019 16:21:52.782711    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.970384021s)
	I1019 16:21:52.782771    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.966029684s)
	I1019 16:21:52.782805    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.962731813s)
	I1019 16:21:53.593344    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.715691722s)
	I1019 16:21:53.593376    4866 addons.go:480] Verifying addon ingress=true in "addons-567517"
	I1019 16:21:53.593557    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.663206816s)
	I1019 16:21:53.593639    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.648829253s)
	W1019 16:21:53.593657    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:53.593678    4866 retry.go:31] will retry after 227.953175ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
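This failure never resolves by retrying: the validator is complaining that /etc/kubernetes/addons/ig-crd.yaml has no top-level apiVersion or kind, which lines up with the "scp inspektor-gadget/ig-crd.yaml ... (14 bytes)" transfer at 16:21:47.579 above. A 14-byte file cannot hold a CRD, so each retry (including the --force variants below) hits the same error, and the suggested --validate=false would only skip validation of an object that is not there. A small Go reproduction of the check the validator is applying (gopkg.in/yaml.v3; a sketch of the idea, not kubectl's implementation):

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    // checkManifest rejects a document whose top level lacks apiVersion/kind,
    // before it would ever reach the API server. Real kubectl splits
    // multi-document files on "---" first; a single document is enough here.
    func checkManifest(data []byte) error {
        var tm typeMeta
        if err := yaml.Unmarshal(data, &tm); err != nil {
            return err
        }
        if tm.APIVersion == "" || tm.Kind == "" {
            return fmt.Errorf("error validating data: [apiVersion not set, kind not set]")
        }
        return nil
    }

    func main() {
        fmt.Println(checkManifest([]byte("metadata:\n  name: x\n"))) // fails like ig-crd.yaml
        fmt.Println(checkManifest([]byte("apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: x\n")))
    }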
	I1019 16:21:53.593739    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.333371799s)
	I1019 16:21:53.593751    4866 addons.go:480] Verifying addon metrics-server=true in "addons-567517"
	I1019 16:21:53.593772    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.323948618s)
	I1019 16:21:53.593960    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.294604596s)
	I1019 16:21:53.593990    4866 addons.go:480] Verifying addon registry=true in "addons-567517"
	I1019 16:21:53.594392    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.979268904s)
	W1019 16:21:53.594429    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 16:21:53.594443    4866 retry.go:31] will retry after 328.014494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
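"ensure CRDs are installed first" is the usual race when a custom resource is applied in the same kubectl apply as the CRD that defines it: the VolumeSnapshotClass object is rejected because discovery has not yet seen the just-created volumesnapshotclasses CRD. Unlike the ig-crd failure above, this one clears on retry once the CRD is established. One way to wait for that explicitly, sketched with the apiextensions client (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitEstablished polls until the CRD reports Established=True, the
    // condition the failed apply above effectively races against.
    func waitEstablished(cs *apiextclient.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().
                Get(context.Background(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("CRD %s not established within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io", 30*time.Second)
        fmt.Println(err)
    }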
	I1019 16:21:53.594482    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.655885354s)
	I1019 16:21:53.596789    4866 out.go:179] * Verifying ingress addon...
	I1019 16:21:53.598797    4866 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-567517 service yakd-dashboard -n yakd-dashboard
	
	I1019 16:21:53.598854    4866 out.go:179] * Verifying registry addon...
	I1019 16:21:53.601503    4866 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 16:21:53.603429    4866 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 16:21:53.631603    4866 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:21:53.631624    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:53.631835    4866 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 16:21:53.631855    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:53.822220    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:53.923427    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:21:54.114174    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.161666726s)
	I1019 16:21:54.114257    4866 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-567517"
	I1019 16:21:54.116814    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.117369    4866 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 16:21:54.121057    4866 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 16:21:54.122959    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.128164    4866 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:21:54.128234    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
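The kapi.go:75/86/96 lines that repeat from here on are a poll loop: list pods by label selector in the given namespace and report the aggregate state, looping while anything is still Pending. Roughly the following client-go call, as a sketch (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podsRunning mirrors the kapi.go polling above: list pods by label
    // selector and report whether every one of them is Running.
    func podsRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil // still Pending, like the lines above
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ok, err := podsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver")
        fmt.Println(ok, err)
    }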
	W1019 16:21:54.195873    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:21:54.607049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:54.607526    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:54.706256    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:54.915226    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.092965325s)
	W1019 16:21:54.915264    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:54.915319    4866 retry.go:31] will retry after 464.844418ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:55.106129    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.107031    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.125620    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:55.366338    4866 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 16:21:55.366445    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:55.380717    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:55.384285    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:55.516093    4866 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 16:21:55.534672    4866 addons.go:239] Setting addon gcp-auth=true in "addons-567517"
	I1019 16:21:55.534717    4866 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:21:55.535183    4866 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:21:55.561615    4866 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 16:21:55.561665    4866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:21:55.588373    4866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:21:55.607155    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:55.607372    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:55.624276    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.105460    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.106923    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:56.124820    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.606469    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:56.615509    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 16:21:56.694898    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:21:56.706717    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:56.827476    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.90399759s)
	I1019 16:21:56.827564    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.446825626s)
	W1019 16:21:56.827590    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:56.827610    4866 retry.go:31] will retry after 389.198287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:56.827647    4866 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.266013745s)
	I1019 16:21:56.830716    4866 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:21:56.833653    4866 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 16:21:56.836489    4866 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 16:21:56.836518    4866 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 16:21:56.850442    4866 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 16:21:56.850464    4866 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 16:21:56.865017    4866 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:56.865040    4866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 16:21:56.877879    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:21:57.106826    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:57.107312    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.125044    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.217786    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:57.387306    4866 addons.go:480] Verifying addon gcp-auth=true in "addons-567517"
	I1019 16:21:57.390607    4866 out.go:179] * Verifying gcp-auth addon...
	I1019 16:21:57.394367    4866 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 16:21:57.408528    4866 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 16:21:57.408602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:57.610341    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:57.611018    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:57.624368    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:57.898249    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.104419    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.106752    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.124634    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:58.150158    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:21:58.150230    4866 retry.go:31] will retry after 1.068598811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
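Every apply attempt in this stretch fails the same way: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in it is missing the mandatory apiVersion and kind fields, so re-applying the unchanged file can never succeed (the other manifests in the batch, such as the gadget daemonset, do go through). A rough stand-in for that check, assuming gopkg.in/yaml.v3 and a deliberately naive document splitter:

// validate.go: simplified stand-in for kubectl's client-side validation,
// showing why each retry above reports "apiVersion not set, kind not set".
package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	// Manifests may hold several documents; this "\n---" split is a naive
	// approximation of the real multi-document parsing.
	for i, doc := range bytes.Split(data, []byte("\n---")) {
		var m map[string]interface{}
		if err := yaml.Unmarshal(doc, &m); err != nil || m == nil {
			continue // skip empty or unparsable documents
		}
		var missing []string
		if _, ok := m["apiVersion"]; !ok {
			missing = append(missing, "apiVersion not set")
		}
		if _, ok := m["kind"]; !ok {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			fmt.Printf("document %d: %v\n", i, missing) // mirrors the kubectl error text
		}
	}
}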
	I1019 16:21:58.397740    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:58.605297    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:58.608174    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:58.624156    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:21:58.695229    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:21:58.897269    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.105150    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.106491    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.124400    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.219628    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:21:59.397852    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:21:59.605994    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:21:59.606752    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:21:59.625075    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:21:59.897559    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.201298    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:00.201526    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.201817    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.238988    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.019299907s)
	W1019 16:22:00.239027    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:00.239061    4866 retry.go:31] will retry after 1.378380059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:00.400895    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:00.604451    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:00.606844    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:00.625234    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:00.898132    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.105812    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.107015    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.124913    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:01.194926    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:01.398134    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:01.607235    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:01.615825    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:01.618080    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:01.627203    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:01.898358    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:02.107769    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.108353    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:02.124997    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:02.398170    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:02.472961    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.472990    4866 retry.go:31] will retry after 1.262803844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:02.604998    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:02.607120    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:02.625134    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:02.898205    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.104606    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.107178    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.124944    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:03.397521    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:03.605005    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:03.608885    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:03.624991    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:03.694847    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:03.735944    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:03.898451    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:04.106239    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.107907    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.124979    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.398458    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:04.552266    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:04.552349    4866 retry.go:31] will retry after 1.842388176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:04.606177    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:04.606344    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:04.637517    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:04.897422    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.105598    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.105977    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.124759    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:05.397826    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:05.604894    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:05.607115    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:05.625141    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:05.695045    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:05.897849    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.106015    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.108445    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.124143    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.395444    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:06.398261    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:06.604740    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:06.607409    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:06.625525    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:06.897849    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.106885    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.107441    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.124862    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:07.179504    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:07.179569    4866 retry.go:31] will retry after 5.462748642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
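The retry.go:31 delays grow across attempts (about 1.1s, 1.4s, 1.3s, 1.8s, then 5.5s here and larger below): a jittered, roughly exponential backoff. A minimal sketch of that shape, illustrative only and not minikube's actual retry helper:

// backoff.go: jittered, growing backoff in the shape of the retry.go delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2)) // up to +50% randomness
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2 // grow the base each round
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retryWithBackoff(5, time.Second, func() error {
		return errors.New("apply failed") // stands in for the failing kubectl apply above
	})
}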
	I1019 16:22:07.397633    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:07.606478    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:07.606185    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:07.624375    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:07.897347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.105682    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.106265    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.124138    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:08.194936    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:08.398014    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:08.605538    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:08.606978    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:08.624972    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:08.900602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.105313    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.107391    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.124094    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.398067    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:09.605908    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:09.606095    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:09.624790    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:09.897136    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.105306    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.106610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.124507    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:10.195200    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:10.397569    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:10.604696    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:10.606921    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:10.624986    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:10.898013    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.106604    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.108142    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.124217    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.397905    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:11.605402    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:11.606848    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:11.624888    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:11.897534    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.106012    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.106942    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.124478    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.397470    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:12.605959    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:12.606405    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:12.624629    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:12.642784    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 16:22:12.695103    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:12.897118    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:13.106780    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.107181    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.124465    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.397377    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:13.435214    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:13.435247    4866 retry.go:31] will retry after 7.252097001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:13.605583    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:13.606758    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:13.624569    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:13.898328    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.106454    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:14.106680    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.124162    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.397611    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:14.604629    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:14.607649    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:14.624736    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:14.897147    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.106308    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.106994    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.125037    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:15.194887    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:15.397585    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:15.605223    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:15.607407    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:15.624433    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:15.897602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.105103    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.107986    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.125573    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.397127    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:16.605638    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:16.606199    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:16.627214    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:16.897239    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.105256    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.106463    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.124228    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:17.195048    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:17.398100    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:17.605288    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:17.615031    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:17.623994    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:17.897768    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.105538    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.107458    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.124553    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.397281    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:18.605005    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:18.606105    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:18.624909    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:18.897579    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.104420    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.106369    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.124570    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:19.398049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:19.606044    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:19.606286    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:19.624777    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:19.694921    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:19.898166    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.105703    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.107838    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.124747    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.397767    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:20.605121    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:20.608841    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:20.624557    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:20.687669    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:20.898658    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:21.105557    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.107855    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.124192    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.398578    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:21.471563    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:21.471593    4866 retry.go:31] will retry after 7.928437037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:21.606038    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:21.607124    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:21.624854    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:21.897497    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.104532    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.106306    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.124046    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:22.194883    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:22.397989    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:22.606298    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:22.607128    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:22.625080    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:22.897556    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.104919    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.107230    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.124351    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.397773    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:23.606144    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:23.606554    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:23.624347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:23.898368    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.105165    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.106452    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.124490    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:24.398305    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:24.605932    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:24.605992    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:24.624716    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:24.694597    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:24.897566    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.104859    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.106823    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.124995    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.397089    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:25.607838    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:25.608073    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:25.624603    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:25.897361    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.105655    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.106199    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.123885    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:26.398252    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:26.606325    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:26.607104    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:26.624986    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:26.694684    4866 node_ready.go:57] node "addons-567517" has "Ready":"False" status (will retry)
	I1019 16:22:26.897945    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.106427    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.107658    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.124745    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.398113    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:27.606322    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:27.606438    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:27.624151    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:27.897601    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.105030    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.106911    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.124727    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.397273    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:28.624913    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:28.631039    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:28.690724    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:28.725489    4866 node_ready.go:49] node "addons-567517" is "Ready"
	I1019 16:22:28.725572    4866 node_ready.go:38] duration metric: took 39.034094721s for node "addons-567517" to be "Ready" ...
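The node_ready.go messages interleaved through the log resolve here: after 39s the node's Ready condition flipped to True. A client-go sketch of the underlying check (node name and kubeconfig path are from the log; the helper is illustrative):

// nodeready.go: sketch of a node_ready.go-style check; a node counts as Ready
// when its NodeReady condition reports status "True".
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeIsReady(cs, "addons-567517")
	fmt.Println(ready, err)
}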
	I1019 16:22:28.725600    4866 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:22:28.725686    4866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:22:28.751172    4866 api_server.go:72] duration metric: took 41.434565486s to wait for apiserver process to appear ...
	I1019 16:22:28.751244    4866 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:22:28.751276    4866 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 16:22:28.768938    4866 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 16:22:28.771626    4866 api_server.go:141] control plane version: v1.34.1
	I1019 16:22:28.771701    4866 api_server.go:131] duration metric: took 20.436968ms to wait for apiserver health ...
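With the kubelet Ready, the next gate is the control plane itself: confirm a kube-apiserver process exists (the pgrep above), then probe /healthz until it returns 200 with the literal body "ok". A rough sketch of such a probe; it skips TLS verification because the cluster serves a certificate the host does not trust, whereas a more careful client would pin the cluster CA instead:

// healthz.go: rough sketch of an apiserver /healthz probe (illustrative).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch: skip verification of the
			// cluster's self-signed serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok" as in the log
}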
	I1019 16:22:28.771724    4866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 16:22:28.799796    4866 system_pods.go:59] 19 kube-system pods found
	I1019 16:22:28.799872    4866 system_pods.go:61] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending
	I1019 16:22:28.799893    4866 system_pods.go:61] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending
	I1019 16:22:28.799913    4866 system_pods.go:61] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending
	I1019 16:22:28.799948    4866 system_pods.go:61] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending
	I1019 16:22:28.799973    4866 system_pods.go:61] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:28.799992    4866 system_pods.go:61] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:28.800013    4866 system_pods.go:61] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:28.800032    4866 system_pods.go:61] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:28.800060    4866 system_pods.go:61] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending
	I1019 16:22:28.800083    4866 system_pods.go:61] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:28.800102    4866 system_pods.go:61] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:28.800126    4866 system_pods.go:61] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:28.800157    4866 system_pods.go:61] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending
	I1019 16:22:28.800182    4866 system_pods.go:61] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:28.800201    4866 system_pods.go:61] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending
	I1019 16:22:28.800221    4866 system_pods.go:61] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending
	I1019 16:22:28.800239    4866 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending
	I1019 16:22:28.800266    4866 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending
	I1019 16:22:28.800291    4866 system_pods.go:61] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending
	I1019 16:22:28.800312    4866 system_pods.go:74] duration metric: took 28.56984ms to wait for pod list to return data ...
	I1019 16:22:28.800334    4866 default_sa.go:34] waiting for default service account to be created ...
	I1019 16:22:28.809955    4866 default_sa.go:45] found service account: "default"
	I1019 16:22:28.810029    4866 default_sa.go:55] duration metric: took 9.674537ms for default service account to be created ...
	I1019 16:22:28.810052    4866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 16:22:28.817339    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:28.817419    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending
	I1019 16:22:28.817439    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending
	I1019 16:22:28.817459    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending
	I1019 16:22:28.817498    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending
	I1019 16:22:28.817521    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:28.817541    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:28.817577    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:28.817599    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:28.817617    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending
	I1019 16:22:28.817638    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:28.817675    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:28.817698    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:28.817716    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending
	I1019 16:22:28.817752    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:28.817771    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending
	I1019 16:22:28.817790    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending
	I1019 16:22:28.817821    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending
	I1019 16:22:28.817843    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending
	I1019 16:22:28.817864    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending
	I1019 16:22:28.817909    4866 retry.go:31] will retry after 288.129516ms: missing components: kube-dns
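The k8s-apps wait differs from the raw pod listing above it: rather than requiring every kube-system pod to be Running, it insists only that a core set of components reach Running (here kube-dns, apparently served by the still-Pending coredns pod) and retries while any are missing. A compact sketch of that kind of check; the prefix-matching and component list are illustrative, not minikube's exact system_pods.go logic:

// missing.go: sketch of a "missing components" check over kube-system pods.
package main

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func missingComponents(cs kubernetes.Interface, required []string) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var missing []string
	for _, want := range required {
		found := false
		for _, p := range pods.Items {
			// Match components by pod-name prefix, e.g. "coredns-66bc5c9577-t5ksp".
			if strings.HasPrefix(p.Name, want) && p.Status.Phase == corev1.PodRunning {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, want)
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	m, err := missingComponents(cs, []string{"coredns", "kube-proxy", "kube-apiserver"})
	fmt.Println(m, err) // e.g. [coredns] while the coredns pod is still Pending
}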
	I1019 16:22:28.938380    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.129781    4866 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:22:29.129805    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.135996    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:29.136034    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:29.136041    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending
	I1019 16:22:29.136046    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending
	I1019 16:22:29.136050    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending
	I1019 16:22:29.136053    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:29.136058    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:29.136063    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:29.136068    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:29.136073    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending
	I1019 16:22:29.136077    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:29.136081    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:29.136087    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:29.136100    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending
	I1019 16:22:29.136109    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:29.136120    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:29.136125    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending
	I1019 16:22:29.136132    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.136141    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending
	I1019 16:22:29.136145    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending
	I1019 16:22:29.136159    4866 retry.go:31] will retry after 324.4012ms: missing components: kube-dns
	I1019 16:22:29.136681    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.141131    4866 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:22:29.141155    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
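The paired `kapi.go:86/96` lines record a second kind of wait: for each addon, minikube lists pods by label selector (e.g. `kubernetes.io/minikube-addons=registry`) and logs the aggregate state until every matched pod is Running; the `Pending: [<nil>]` suffix is Go's rendering of the phase next to a nil reason. A client-go sketch of the same check, under the assumption that a plain list-and-inspect loop is enough (the real kapi.go may use watches); namespace and selector values are the ones visible in the log, everything else is illustrative:

// Hedged sketch of a label-selector pod wait in the spirit of the
// kapi.go lines above, using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForSelector(cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
		panic(err)
	}
}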
	I1019 16:22:29.400981    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:29.419371    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.470749    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:29.470788    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:29.470797    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:29.470807    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:29.470815    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:29.470821    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:29.470827    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:29.470832    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:29.470842    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:29.470851    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:29.470861    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:29.470866    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:29.470872    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:29.470885    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:29.470894    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:29.470909    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:29.470917    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:29.470929    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.470936    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.470947    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:29.470962    4866 retry.go:31] will retry after 439.223247ms: missing components: kube-dns
	I1019 16:22:29.606945    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:29.607532    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:29.624681    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:29.898366    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:29.935080    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:29.935123    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:29.935131    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:29.935140    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:29.935146    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:29.935156    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:29.935162    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:29.935173    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:29.935178    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:29.935185    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:29.935193    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:29.935198    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:29.935204    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:29.935215    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:29.935221    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:29.935228    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:29.935234    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:29.935242    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.935251    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:29.935261    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:29.935276    4866 retry.go:31] will retry after 551.509302ms: missing components: kube-dns
	I1019 16:22:30.109580    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.109716    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.127215    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.397785    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:30.500415    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:30.500454    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:30.500463    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:30.500473    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:30.500480    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:30.500485    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:30.500490    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:30.500495    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:30.500499    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:30.500507    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:30.500524    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:30.500534    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:30.500540    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:30.500547    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:30.500557    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:30.500564    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:30.500576    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:30.500582    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:30.500589    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:30.500593    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:30.500608    4866 retry.go:31] will retry after 537.006592ms: missing components: kube-dns
	I1019 16:22:30.611717    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:30.611932    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:30.625143    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:30.851086    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.450065736s)
	W1019 16:22:30.851121    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:30.851160    4866 retry.go:31] will retry after 10.616384705s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
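This inspektor-gadget apply fails client-side validation, not the API call: everything in ig-deployment.yaml lands as "unchanged"/"configured", but kubectl finds a YAML document in /etc/kubernetes/addons/ig-crd.yaml with no top-level `apiVersion` or `kind`, and every Kubernetes manifest document must carry both. A common way to hit exactly this message is a stray `---` separator that creates an empty document, or a generated file missing its header; which of these applies to ig-crd.yaml cannot be read off the log. A small hypothetical diagnostic, assuming `gopkg.in/yaml.v3`, that scans a multi-document file and flags offending documents the same way:

// Hypothetical checker for the validation error above: report YAML
// documents lacking the top-level apiVersion/kind kubectl requires.
// The file path is the one from the log; yaml.v3 is an assumption.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		if doc == nil {
			fmt.Printf("document %d: empty (stray --- separator?)\n", i)
			continue
		}
		if _, ok := doc["apiVersion"]; !ok {
			fmt.Printf("document %d: apiVersion not set\n", i)
		}
		if _, ok := doc["kind"]; !ok {
			fmt.Printf("document %d: kind not set\n", i)
		}
	}
}

The `--validate=false` workaround the error message suggests would let the apply proceed, but it only masks the malformed document rather than fixing it.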
	I1019 16:22:30.898291    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.044715    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:31.044756    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:31.044765    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:31.044772    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:31.044781    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:31.044786    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:31.044791    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:31.044797    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:31.044805    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:31.044815    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:31.044825    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:31.044831    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:31.044837    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:31.044850    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:31.044857    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:31.044863    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:31.044872    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:31.044879    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.044891    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.044898    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:31.044914    4866 retry.go:31] will retry after 858.848711ms: missing components: kube-dns
	I1019 16:22:31.146488    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:31.146678    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.146859    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:31.398641    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.607398    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:31.608158    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:31.626698    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:31.898121    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:31.908957    4866 system_pods.go:86] 19 kube-system pods found
	I1019 16:22:31.908992    4866 system_pods.go:89] "coredns-66bc5c9577-t5ksp" [265316b1-b0ac-4650-a6a4-ab987e6e512d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:31.909002    4866 system_pods.go:89] "csi-hostpath-attacher-0" [bc9aca6e-eb4c-479b-8510-afc9fb5fdc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:31.909011    4866 system_pods.go:89] "csi-hostpath-resizer-0" [cbbfc31e-1438-4518-9396-74830cb8655d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:31.909018    4866 system_pods.go:89] "csi-hostpathplugin-mgwtr" [57b2f564-ecff-4ea8-87d1-5689e96aae78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:31.909027    4866 system_pods.go:89] "etcd-addons-567517" [e6f7d0c1-1a48-4785-9d63-6f4bafc2b003] Running
	I1019 16:22:31.909032    4866 system_pods.go:89] "kindnet-2qd77" [9c285537-59b6-47a1-ba65-80f19a75cc4e] Running
	I1019 16:22:31.909036    4866 system_pods.go:89] "kube-apiserver-addons-567517" [9a78bb6e-f2d6-48dc-ad85-b86f3b79560e] Running
	I1019 16:22:31.909040    4866 system_pods.go:89] "kube-controller-manager-addons-567517" [4bd38986-3a7a-4225-b0ce-2fc424e8c22a] Running
	I1019 16:22:31.909052    4866 system_pods.go:89] "kube-ingress-dns-minikube" [bd677661-ece4-44ce-8c4a-e47b746cb1fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:31.909056    4866 system_pods.go:89] "kube-proxy-z49jr" [3752d199-ae48-4c90-b0aa-6d946ff98f41] Running
	I1019 16:22:31.909062    4866 system_pods.go:89] "kube-scheduler-addons-567517" [90ef1ed6-27f2-46f4-91e4-f242fccf711a] Running
	I1019 16:22:31.909073    4866 system_pods.go:89] "metrics-server-85b7d694d7-544h5" [78428094-44c9-4706-8713-d51073930d3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:31.909079    4866 system_pods.go:89] "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:31.909089    4866 system_pods.go:89] "registry-6b586f9694-tf8nq" [e702fdd5-8bcb-4900-a8d3-65d7367ff6d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:31.909095    4866 system_pods.go:89] "registry-creds-764b6fb674-ngnr2" [171eb9b7-4bf7-4609-b5d9-1bc1a46d4d9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:31.909105    4866 system_pods.go:89] "registry-proxy-9vlrb" [d9ae9ce3-0038-46ec-9bbc-23586cdba36b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:31.909111    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fsjzh" [ea8d9146-7c26-4e77-864e-46c352f3367f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.909121    4866 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tnds8" [f1981718-6896-490c-943b-926a7b973bbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:31.909132    4866 system_pods.go:89] "storage-provisioner" [8b874171-c4dc-42d3-a74a-a2bfa88903bf] Running
	I1019 16:22:31.909140    4866 system_pods.go:126] duration metric: took 3.099070958s to wait for k8s-apps to be running ...
	I1019 16:22:31.909157    4866 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 16:22:31.909212    4866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:22:31.924775    4866 system_svc.go:56] duration metric: took 15.61464ms WaitForService to wait for kubelet
	I1019 16:22:31.924809    4866 kubeadm.go:587] duration metric: took 44.608202487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:22:31.924827    4866 node_conditions.go:102] verifying NodePressure condition ...
	I1019 16:22:31.927708    4866 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 16:22:31.927740    4866 node_conditions.go:123] node cpu capacity is 2
	I1019 16:22:31.927752    4866 node_conditions.go:105] duration metric: took 2.920052ms to run NodePressure ...
	I1019 16:22:31.927765    4866 start.go:242] waiting for startup goroutines ...
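At this point the core cluster checks are complete: the kube-system apps, the kubelet service, and the NodePressure conditions have all passed (44.6s total for the kubeadm wait), so the lines that follow are just the four remaining addon waiters cycling through the ingress-nginx, registry, csi-hostpath-driver, and gcp-auth label selectors until their pods come up.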
	I1019 16:22:32.105082    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.107279    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.124310    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.399706    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:32.606530    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.606737    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.708854    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:32.898221    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.109595    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.109861    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.125102    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.398745    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:33.608262    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.608507    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.628097    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.899011    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.105844    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.107147    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:34.125592    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.397604    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:34.606186    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.608057    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:34.625457    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.897087    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.107145    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.108438    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:35.124979    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.398697    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:35.607430    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:35.607915    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.625565    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.897974    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.106277    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.108361    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:36.124525    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.397921    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.606100    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.607545    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:36.624847    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.904793    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.106156    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.106610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:37.124800    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.398352    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.605205    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.606315    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:37.624837    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.900493    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.107408    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.107734    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:38.124981    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.398574    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.605193    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.607143    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:38.625454    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.902955    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.106825    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.108470    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:39.125196    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.398529    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.605226    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.608405    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:39.624602    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.900227    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.109708    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:40.120524    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.152404    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.397916    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.606981    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.607960    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:40.630183    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.899659    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.107842    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.109598    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:41.125291    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:41.399515    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.467820    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:41.625590    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:41.627302    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.636317    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:41.935094    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.112943    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:42.119807    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:42.133280    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.398302    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.607609    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:42.608135    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:42.624856    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.897972    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:43.107543    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:43.108103    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:43.124508    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.212899    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.744991443s)
	W1019 16:22:43.212977    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:43.213010    4866 retry.go:31] will retry after 17.143581913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
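The retried apply, 10.6s after the first, fails with the identical validation error, which points at the ig-crd.yaml manifest itself being malformed rather than a transient ordering problem; consistent with the backoff pattern noted earlier, the next attempt is pushed out to 17.1s.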
	I1019 16:22:43.398771    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:43.605671    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:43.606760    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:43.625568    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.898211    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:44.107334    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:44.107585    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:44.124839    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.398155    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:44.623264    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:44.630826    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.631540    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:44.898171    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:45.136332    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:45.137011    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:45.144118    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.399347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:45.606889    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:45.608613    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:45.624967    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.898129    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:46.107364    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:46.108901    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:46.124991    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.398356    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:46.604603    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:46.607035    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:46.625636    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.897793    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:47.105464    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:47.107485    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:47.124571    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:47.397975    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:47.606819    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:47.610486    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:47.624748    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:47.897777    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:48.106946    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:48.108084    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:48.125097    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:48.398669    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:48.604729    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:48.606798    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:48.624872    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:48.898049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:49.106602    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:49.107695    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:49.125479    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:49.397800    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:49.604975    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:49.607071    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:49.623535    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:49.897300    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:50.107444    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:50.108542    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:50.129479    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:50.398448    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:50.608541    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:50.609031    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:50.624682    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:50.898037    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:51.117857    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:51.118743    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:51.125062    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:51.400207    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:51.612610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:51.614846    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:51.627023    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:51.899475    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:52.149965    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:52.150341    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:52.184388    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:52.401627    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:52.605493    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:52.608204    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:52.625135    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:52.898049    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:53.114781    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:53.115234    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:53.125665    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:53.399350    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:53.605043    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:53.607784    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:53.631207    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:53.897907    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:54.105820    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:54.107750    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:54.125160    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:54.397372    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:54.606308    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:54.607784    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:54.625626    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:54.897756    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:55.105901    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:55.108679    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:55.125659    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:55.397296    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:55.604708    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:55.606767    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:55.624669    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:55.897618    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:56.104880    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:56.106701    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:56.124653    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:56.397534    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:56.605793    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:56.607228    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:56.624307    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:56.899643    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:57.106728    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:57.108760    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:57.125568    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:57.397880    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:57.605302    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:57.607493    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:57.624625    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:57.897682    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:58.108035    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:58.108152    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:58.125118    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:58.398435    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:58.609183    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:58.609582    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:58.624558    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:58.897383    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:59.106525    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:59.106743    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:59.125093    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:59.398136    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:59.605762    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:59.607017    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:59.624990    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:59.898231    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:00.108676    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:00.109105    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:00.140669    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:00.358947    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:23:00.400026    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:00.607959    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:00.631587    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:00.632046    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:00.906499    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:01.107278    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:01.108224    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:01.124582    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:01.397728    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:01.607933    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:01.608060    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:01.633393    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:01.758114    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.399129964s)
	W1019 16:23:01.758151    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:01.758169    4866 retry.go:31] will retry after 18.757347671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
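
The validation failure above is deterministic: kubectl reports that /etc/kubernetes/addons/ig-crd.yaml has neither apiVersion nor kind set, so the scheduled retries of the identical apply can be expected to fail the same way. A sketch for confirming this from the node, using kubectl's own suggested --validate=false escape hatch (which masks the broken manifest rather than fixing it):

	# Show the head of the manifest; a well-formed object starts with
	# "apiVersion:" and "kind:" at the top level.
	out/minikube-linux-arm64 -p addons-567517 ssh -- head -n 5 /etc/kubernetes/addons/ig-crd.yaml

	# Re-run the failing apply with validation disabled, as the error text suggests.
	out/minikube-linux-arm64 -p addons-567517 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
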
	I1019 16:23:01.898414    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:02.107171    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:02.107618    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:02.125419    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:02.397850    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:02.604818    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:02.606512    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:02.628755    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:02.898367    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:03.107688    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:03.107776    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:03.125585    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:03.398466    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:03.605223    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:03.607730    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:03.626165    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:03.897939    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:04.105291    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:04.107560    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:04.125563    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:04.397817    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:04.606783    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:04.608593    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:04.625079    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:04.898372    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:05.105196    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:05.107940    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:05.124921    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:05.398171    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:05.612597    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:05.613425    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:05.639006    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:05.898354    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:06.105061    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:06.107971    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:06.125433    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:06.397798    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:06.606787    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:06.608514    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:06.624865    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:06.898454    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:07.105570    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:07.108725    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:07.125408    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:07.398612    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:07.613685    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:07.614245    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:07.627824    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:07.898485    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:08.105023    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:08.106604    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:08.125162    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:08.435821    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:08.604884    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:08.606694    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:08.624963    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:08.897500    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:09.107373    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:09.109639    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:09.125433    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:09.398779    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:09.607817    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:09.609323    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:09.624395    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:09.897506    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:10.104693    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:10.106610    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:10.124909    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:10.397890    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:10.607501    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:10.607916    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:10.627428    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:10.897768    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:11.107577    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:11.107875    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:11.125063    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:11.398156    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:11.607595    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:11.608117    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:11.623974    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:11.898209    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:12.105895    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:12.108622    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:12.125066    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:12.398055    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:12.606426    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:12.607544    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:12.625331    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:12.897922    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:13.105025    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:13.107013    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:13.123908    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:13.398588    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:13.605525    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:13.608119    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:13.624465    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:13.898199    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:14.106104    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:14.106249    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:14.124087    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:14.397713    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:14.604786    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:14.607078    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:14.623976    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:14.903062    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:15.107288    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:15.107926    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:15.125569    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:15.398887    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:15.605330    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:15.608200    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:15.624132    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:15.898401    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:16.105617    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:16.106726    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:16.124637    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:16.397828    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:16.615422    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:16.627929    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:16.629050    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:16.898221    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:17.105752    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:17.108435    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:17.124801    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:17.401858    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:17.607302    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:17.607726    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:17.625156    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:17.898056    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:18.106047    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:18.107366    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:18.124457    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:18.398059    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:18.606917    4866 kapi.go:107] duration metric: took 1m25.003488426s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 16:23:18.607096    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:18.626267    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:18.897424    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:19.105177    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:19.124502    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:19.400360    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:19.610410    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:19.626339    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:19.897450    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:20.105503    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:20.124753    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:20.397719    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:20.516069    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:23:20.605394    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:20.624405    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:20.898217    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:21.105075    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:21.124735    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:21.398094    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:21.610249    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:21.616167    4866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100060724s)
	W1019 16:23:21.616211    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:21.616233    4866 retry.go:31] will retry after 27.141385061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:21.624864    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:21.898615    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:22.106659    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:22.127460    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:22.397975    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:22.605492    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:22.625061    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:22.898337    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:23.105307    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:23.124689    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:23.398254    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:23.606110    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:23.624869    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:23.897378    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:24.105272    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:24.124374    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:24.397480    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:24.604414    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:24.629347    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:24.910080    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:25.106303    4866 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:25.125100    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:25.400757    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:25.605490    4866 kapi.go:107] duration metric: took 1m32.003984723s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 16:23:25.625030    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:25.898770    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:26.195244    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:26.397225    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:26.625507    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:26.897781    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:27.126064    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:27.398004    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:27.625582    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:27.898259    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:28.124930    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:28.398624    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:28.625687    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:28.897690    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:29.125093    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:29.398009    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:29.641980    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:29.898933    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:30.127104    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:30.398223    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:30.624420    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:30.897485    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:31.125600    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:31.398743    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:31.625148    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:31.897828    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:32.124747    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:32.397503    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:32.624856    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:32.899910    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:33.124427    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:33.397944    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:33.625373    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:33.898046    4866 kapi.go:107] duration metric: took 1m36.503679093s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 16:23:33.903373    4866 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-567517 cluster.
	I1019 16:23:33.906720    4866 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 16:23:33.909502    4866 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
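
The three notices above describe gcp-auth's opt-out: a pod that carries any label with the gcp-auth-skip-secret key is skipped by the mutating webhook. A minimal sketch (the pod name and image are placeholders, not from this run):

	out/minikube-linux-arm64 -p addons-567517 kubectl -- run skip-demo \
	  --image=busybox:stable --labels=gcp-auth-skip-secret=true \
	  --restart=Never -- sleep 300
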
	I1019 16:23:34.124552    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:34.625513    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:35.126106    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:35.624812    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:36.124526    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:36.625336    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:37.124962    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:37.624456    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:38.124447    4866 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:38.625734    4866 kapi.go:107] duration metric: took 1m44.504684737s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
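
With this line, all four kapi.go label waits have completed: registry after 1m25.0s, ingress-nginx after 1m32.0s, gcp-auth after 1m36.5s, and csi-hostpath-driver after 1m44.5s. The same readiness can be inspected by hand with the selectors from the log; the namespaces below are assumptions based on where minikube conventionally deploys these addons:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
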
	I1019 16:23:48.757858    4866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 16:23:49.575413    4866 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 16:23:49.575511    4866 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1019 16:23:49.580931    4866 out.go:179] * Enabled addons: ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 16:23:49.583792    4866 addons.go:515] duration metric: took 2m2.266843881s for enable addons: enabled=[ingress-dns nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 16:23:49.583850    4866 start.go:247] waiting for cluster config update ...
	I1019 16:23:49.583874    4866 start.go:256] writing updated cluster config ...
	I1019 16:23:49.584833    4866 ssh_runner.go:195] Run: rm -f paused
	I1019 16:23:49.588681    4866 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:23:49.592501    4866 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t5ksp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.597772    4866 pod_ready.go:94] pod "coredns-66bc5c9577-t5ksp" is "Ready"
	I1019 16:23:49.597805    4866 pod_ready.go:86] duration metric: took 5.275623ms for pod "coredns-66bc5c9577-t5ksp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.600211    4866 pod_ready.go:83] waiting for pod "etcd-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.604650    4866 pod_ready.go:94] pod "etcd-addons-567517" is "Ready"
	I1019 16:23:49.604677    4866 pod_ready.go:86] duration metric: took 4.435712ms for pod "etcd-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.607121    4866 pod_ready.go:83] waiting for pod "kube-apiserver-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.614406    4866 pod_ready.go:94] pod "kube-apiserver-addons-567517" is "Ready"
	I1019 16:23:49.614477    4866 pod_ready.go:86] duration metric: took 7.322007ms for pod "kube-apiserver-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.618184    4866 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:49.993174    4866 pod_ready.go:94] pod "kube-controller-manager-addons-567517" is "Ready"
	I1019 16:23:49.993203    4866 pod_ready.go:86] duration metric: took 374.9902ms for pod "kube-controller-manager-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:50.192695    4866 pod_ready.go:83] waiting for pod "kube-proxy-z49jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:50.592709    4866 pod_ready.go:94] pod "kube-proxy-z49jr" is "Ready"
	I1019 16:23:50.592733    4866 pod_ready.go:86] duration metric: took 400.009367ms for pod "kube-proxy-z49jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:50.793460    4866 pod_ready.go:83] waiting for pod "kube-scheduler-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:51.192559    4866 pod_ready.go:94] pod "kube-scheduler-addons-567517" is "Ready"
	I1019 16:23:51.192601    4866 pod_ready.go:86] duration metric: took 399.113391ms for pod "kube-scheduler-addons-567517" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:23:51.192615    4866 pod_ready.go:40] duration metric: took 1.603898841s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
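
The pod_ready phase re-verifies the six kube-system control-plane selectors with a 4m0s ceiling and finishes in about 1.6s because every pod is already Ready. An equivalent manual check, one selector per invocation, as a sketch:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=240s
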
	I1019 16:23:51.591062    4866 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 16:23:51.594282    4866 out.go:179] * Done! kubectl is now configured to use "addons-567517" cluster and "default" namespace by default
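
"Done!" here reflects overall cluster startup, not the inspektor-gadget addon, which exhausted its retries above; the other fifteen addons enabled cleanly. The reported kubectl 1.33.2 against cluster 1.34.1 is within kubectl's supported one-minor-version skew, so that note is informational. Assuming a corrected ig-crd.yaml ships in a later build, re-enabling the failed addon would be:

	out/minikube-linux-arm64 -p addons-567517 addons enable inspektor-gadget
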
	
	
	==> CRI-O <==
	Oct 19 16:24:17 addons-567517 crio[836]: time="2025-10-19T16:24:17.651803722Z" level=info msg="Stopped pod sandbox: b2ca3eac25e954af726fb5dcaaf28b4a14e829f7ae33c5c45303bcee590dfcf5" id=edf6cefd-517f-4ee6-b001-311654d23332 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:24:18 addons-567517 crio[836]: time="2025-10-19T16:24:18.626959743Z" level=info msg="Stopping pod sandbox: edffe20e62c597aed36f70f4d780c10759e60067721dadec453f2323954de683" id=64798ed5-2d16-43b4-8841-db66d2d51846 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:24:18 addons-567517 crio[836]: time="2025-10-19T16:24:18.627262802Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:edffe20e62c597aed36f70f4d780c10759e60067721dadec453f2323954de683 UID:1fea0ad2-c837-496a-8b57-5d67c9d3a3fe NetNS:/var/run/netns/1bd8d58a-1bc2-4628-8b08-552e7d35d313 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017d0468}] Aliases:map[]}"
	Oct 19 16:24:18 addons-567517 crio[836]: time="2025-10-19T16:24:18.627428334Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 19 16:24:18 addons-567517 crio[836]: time="2025-10-19T16:24:18.657739122Z" level=info msg="Stopped pod sandbox: edffe20e62c597aed36f70f4d780c10759e60067721dadec453f2323954de683" id=64798ed5-2d16-43b4-8841-db66d2d51846 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.477744763Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46/POD" id=c6dea318-f708-4e2d-8d28-02a377e0b243 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.477807959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.511571625Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46 Namespace:local-path-storage ID:781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06 UID:b6b004be-5cb5-4aaa-a20e-a92039c66ebf NetNS:/var/run/netns/09a84f76-38d6-47fa-8198-1d071fbce64f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017d09c8}] Aliases:map[]}"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.511613077Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46 to CNI network \"kindnet\" (type=ptp)"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.526188189Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46 Namespace:local-path-storage ID:781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06 UID:b6b004be-5cb5-4aaa-a20e-a92039c66ebf NetNS:/var/run/netns/09a84f76-38d6-47fa-8198-1d071fbce64f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017d09c8}] Aliases:map[]}"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.526358028Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46 for CNI network kindnet (type=ptp)"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.531162126Z" level=info msg="Ran pod sandbox 781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06 with infra container: local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46/POD" id=c6dea318-f708-4e2d-8d28-02a377e0b243 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.532871227Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2161237d-e124-43fa-9194-74e07ede70c6 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.5343542Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=ba0fa472-88a2-4f57-b005-6c1d8db150e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.541764965Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46/helper-pod" id=7bf9227b-7cf4-4c20-a399-b752f75bea05 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.54269911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.549247698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.549858115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.568781649Z" level=info msg="Created container 589d3b9a3f3b6382e71554549283b7b8524f75b7f8bf983466b400b01a5bdb06: local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46/helper-pod" id=7bf9227b-7cf4-4c20-a399-b752f75bea05 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.56993278Z" level=info msg="Starting container: 589d3b9a3f3b6382e71554549283b7b8524f75b7f8bf983466b400b01a5bdb06" id=7bb72668-476b-4c52-bd7b-8e1a0b586124 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 16:24:20 addons-567517 crio[836]: time="2025-10-19T16:24:20.572997862Z" level=info msg="Started container" PID=5523 containerID=589d3b9a3f3b6382e71554549283b7b8524f75b7f8bf983466b400b01a5bdb06 description=local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46/helper-pod id=7bb72668-476b-4c52-bd7b-8e1a0b586124 name=/runtime.v1.RuntimeService/StartContainer sandboxID=781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06
	Oct 19 16:24:21 addons-567517 crio[836]: time="2025-10-19T16:24:21.641920119Z" level=info msg="Stopping pod sandbox: 781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06" id=6b89687d-08f4-4d92-80c5-263c973cd283 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:24:21 addons-567517 crio[836]: time="2025-10-19T16:24:21.642848381Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46 Namespace:local-path-storage ID:781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06 UID:b6b004be-5cb5-4aaa-a20e-a92039c66ebf NetNS:/var/run/netns/09a84f76-38d6-47fa-8198-1d071fbce64f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014fcd58}] Aliases:map[]}"
	Oct 19 16:24:21 addons-567517 crio[836]: time="2025-10-19T16:24:21.642994564Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46 from CNI network \"kindnet\" (type=ptp)"
	Oct 19 16:24:21 addons-567517 crio[836]: time="2025-10-19T16:24:21.672567975Z" level=info msg="Stopped pod sandbox: 781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06" id=6b89687d-08f4-4d92-80c5-263c973cd283 name=/runtime.v1.RuntimeService/StopPodSandbox
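
The crio lines above trace one complete CRI pod lifecycle for the local-path helper pod: RunPodSandbox (with the kindnet CNI ADD), CreateContainer, StartContainer, then StopPodSandbox (with the CNI DEL). A minimal Go sketch of that same gRPC call sequence against crio's default socket follows; the pod/container names and the stripped-down configs are illustrative, not what kubelet actually sends:

	package main

	import (
		"context"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// crio's default CRI endpoint.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ctx := context.Background()

		sandboxCfg := &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "helper-pod-demo", // illustrative name
				Namespace: "local-path-storage",
				Uid:       "00000000-demo",
			},
		}

		// "Ran pod sandbox ... with infra container": the CNI ADD happens here.
		sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
		if err != nil {
			log.Fatal(err)
		}

		// "Creating container" / "Created container".
		created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
			PodSandboxId:  sb.PodSandboxId,
			SandboxConfig: sandboxCfg,
			Config: &runtimeapi.ContainerConfig{
				Metadata: &runtimeapi.ContainerMetadata{Name: "helper-pod"},
				Image:    &runtimeapi.ImageSpec{Image: "docker.io/busybox:stable"},
			},
		})
		if err != nil {
			log.Fatal(err)
		}

		// "Starting container" / "Started container".
		if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
			ContainerId: created.ContainerId,
		}); err != nil {
			log.Fatal(err)
		}

		// "Stopping pod sandbox": tears down the netns and runs the CNI DEL.
		if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
			PodSandboxId: sb.PodSandboxId,
		}); err != nil {
			log.Fatal(err)
		}
	}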
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	589d3b9a3f3b6       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   781b5fd3e85b9       helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46   local-path-storage
	eeab86da61166       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago        Exited              busybox                                  0                   edffe20e62c59       test-local-path                                              default
	1f0f66ce08b69       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          6 seconds ago        Exited              registry-test                            0                   b2ca3eac25e95       registry-test                                                default
	d386ee17f9e7a       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   efdf2c8464516       helper-pod-create-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46   local-path-storage
	185a0d8fde466       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          27 seconds ago       Running             busybox                                  0                   947bb1a23db79       busybox                                                      default
	12ea8dcf61f96       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          44 seconds ago       Running             csi-snapshotter                          0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                                     kube-system
	b3e64e8c305d3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          45 seconds ago       Running             csi-provisioner                          0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                                     kube-system
	4303ea4e21d41       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            47 seconds ago       Running             liveness-probe                           0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                                     kube-system
	82a85755a9b57       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           47 seconds ago       Running             hostpath                                 0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                                     kube-system
	a1ca6dedcb00c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 49 seconds ago       Running             gcp-auth                                 0                   ff99a7b2d73ca       gcp-auth-78565c9fb4-qw69p                                    gcp-auth
	bbc0d449ae5d2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                52 seconds ago       Running             node-driver-registrar                    0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                                     kube-system
	593f4dde7337f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            53 seconds ago       Running             gadget                                   0                   fba4b87a58bbf       gadget-b4v28                                                 gadget
	838ce5208b8da       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             57 seconds ago       Running             controller                               0                   4194579cb1297       ingress-nginx-controller-675c5ddd98-n9vqc                    ingress-nginx
	43da60e537720       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   a73efc06b00e4       registry-proxy-9vlrb                                         kube-system
	1509a0b94cd4f       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   e110a22587053       nvidia-device-plugin-daemonset-s8mrl                         kube-system
	7b6e1fc916e5a       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    3                   7254654e6b5b5       gcp-auth-certs-patch-8f4hm                                   gcp-auth
	d10be64e72568       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   0e83e6ec02cb4       registry-6b586f9694-tf8nq                                    kube-system
	eafe11c1243da       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   116e98e9e1bc9       csi-hostpath-attacher-0                                      kube-system
	305f495ac25ce       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   7aa1eaad9746e       csi-hostpathplugin-mgwtr                                     kube-system
	351389adaebd8       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   fa87bce3e62cc       yakd-dashboard-5ff678cb9-9cg5f                               yakd-dashboard
	532a5e202b24c       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    2                   be62da52415c4       ingress-nginx-admission-patch-g5z8w                          ingress-nginx
	40e54317c12f2       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   c6a7d17996190       csi-hostpath-resizer-0                                       kube-system
	cd9dd5ae64c43       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   74ee3a4fe7030       kube-ingress-dns-minikube                                    kube-system
	3e9d456c959c9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   afc5a6690f416       snapshot-controller-7d9fbc56b8-fsjzh                         kube-system
	b1f7b13f9f431       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   b4dc81bb2d815       ingress-nginx-admission-create-qdcxz                         ingress-nginx
	375a875dfdf02       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   12c70a7e3008b       local-path-provisioner-648f6765c9-klzcv                      local-path-storage
	1871e77487146       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   23c809f243f38       metrics-server-85b7d694d7-544h5                              kube-system
	530194304d419       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   4171627f0e1a9       snapshot-controller-7d9fbc56b8-tnds8                         kube-system
	fe31102d224ff       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   12c7a59df3dbe       cloud-spanner-emulator-86bd5cbb97-wks95                      default
	42990e86d93f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   56178b8f2dc7f       coredns-66bc5c9577-t5ksp                                     kube-system
	48cf170685f60       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   21afe65583e09       storage-provisioner                                          kube-system
	6e17fa2c1568b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   a22d2c4526577       kindnet-2qd77                                                kube-system
	d771336608d23       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   2d083856dfc77       kube-proxy-z49jr                                             kube-system
	16eba4f0809b0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   e13ef3e95863f       kube-scheduler-addons-567517                                 kube-system
	b0cb46d490358       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   df1ed298095e0       etcd-addons-567517                                           kube-system
	60b936e140fc2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   070e2e5a4c033       kube-apiserver-addons-567517                                 kube-system
	eecd76037af86       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   7ebb43d35321f       kube-controller-manager-addons-567517                        kube-system
	
	
	==> coredns [42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287] <==
	[INFO] 10.244.0.18:45703 - 29473 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002469059s
	[INFO] 10.244.0.18:45703 - 24787 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000146284s
	[INFO] 10.244.0.18:45703 - 6700 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000101015s
	[INFO] 10.244.0.18:44303 - 63811 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150534s
	[INFO] 10.244.0.18:44303 - 64049 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000147703s
	[INFO] 10.244.0.18:45885 - 1546 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104166s
	[INFO] 10.244.0.18:45885 - 1365 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117188s
	[INFO] 10.244.0.18:37068 - 27297 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093236s
	[INFO] 10.244.0.18:37068 - 27486 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145078s
	[INFO] 10.244.0.18:58395 - 8969 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00138282s
	[INFO] 10.244.0.18:58395 - 9422 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001451112s
	[INFO] 10.244.0.18:46868 - 4159 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101179s
	[INFO] 10.244.0.18:46868 - 4563 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000140983s
	[INFO] 10.244.0.21:49815 - 29555 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000217406s
	[INFO] 10.244.0.21:57694 - 15060 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000140687s
	[INFO] 10.244.0.21:45422 - 12553 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104297s
	[INFO] 10.244.0.21:49644 - 22327 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107546s
	[INFO] 10.244.0.21:56916 - 10763 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093367s
	[INFO] 10.244.0.21:35526 - 11433 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100457s
	[INFO] 10.244.0.21:41246 - 15966 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002335715s
	[INFO] 10.244.0.21:42251 - 48755 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002067658s
	[INFO] 10.244.0.21:49567 - 11800 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005187023s
	[INFO] 10.244.0.21:38817 - 60872 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.005522503s
	[INFO] 10.244.0.24:45000 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174352s
	[INFO] 10.244.0.24:43946 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000209372s
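
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion, not failures: with the cluster default ndots:5, a name with fewer than five dots is tried against each search domain in turn (here kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal) before the bare name finally resolves. A small Go sketch of that candidate ordering, illustrating glibc-style behavior rather than the resolver itself:

	package main

	import (
		"fmt"
		"strings"
	)

	// expand returns the candidate FQDNs a stub resolver tries, in order.
	func expand(name string, search []string, ndots int) []string {
		var fromSearch []string
		for _, domain := range search {
			fromSearch = append(fromSearch, name+"."+domain+".")
		}
		absolute := []string{name + "."}
		if strings.Count(name, ".") >= ndots {
			// Enough dots: the name is tried as-is first, then the search list.
			return append(absolute, fromSearch...)
		}
		// Otherwise the search domains are tried first, the bare name last.
		return append(fromSearch, absolute...)
	}

	func main() {
		search := []string{
			"kube-system.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-east-2.compute.internal",
		}
		// "registry.kube-system.svc.cluster.local" has 4 dots (< ndots 5),
		// so each search domain yields an NXDOMAIN before the final NOERROR,
		// matching the coredns log above.
		for _, fqdn := range expand("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(fqdn)
		}
	}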
	
	
	==> describe nodes <==
	Name:               addons-567517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-567517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=addons-567517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_21_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-567517
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-567517"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-567517
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:24:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:24:14 +0000   Sun, 19 Oct 2025 16:21:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:24:14 +0000   Sun, 19 Oct 2025 16:21:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:24:14 +0000   Sun, 19 Oct 2025 16:21:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:24:14 +0000   Sun, 19 Oct 2025 16:22:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-567517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                173041d4-c781-472d-8e69-908cdc326432
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     cloud-spanner-emulator-86bd5cbb97-wks95      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gadget                      gadget-b4v28                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-78565c9fb4-qw69p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-n9vqc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m29s
	  kube-system                 coredns-66bc5c9577-t5ksp                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m35s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 csi-hostpathplugin-mgwtr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 etcd-addons-567517                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m41s
	  kube-system                 kindnet-2qd77                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m35s
	  kube-system                 kube-apiserver-addons-567517                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-addons-567517        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-z49jr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-scheduler-addons-567517                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 metrics-server-85b7d694d7-544h5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m30s
	  kube-system                 nvidia-device-plugin-daemonset-s8mrl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 registry-6b586f9694-tf8nq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 registry-creds-764b6fb674-ngnr2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 registry-proxy-9vlrb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 snapshot-controller-7d9fbc56b8-fsjzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 snapshot-controller-7d9fbc56b8-tnds8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  local-path-storage          local-path-provisioner-648f6765c9-klzcv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9cg5f               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m33s                  kube-proxy       
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m47s (x8 over 2m47s)  kubelet          Node addons-567517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x8 over 2m47s)  kubelet          Node addons-567517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x8 over 2m47s)  kubelet          Node addons-567517 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m41s                  kubelet          Node addons-567517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m41s                  kubelet          Node addons-567517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m41s                  kubelet          Node addons-567517 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m36s                  node-controller  Node addons-567517 event: Registered Node addons-567517 in Controller
	  Normal   NodeReady                114s                   kubelet          Node addons-567517 status is now: NodeReady
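
For reference, the percentages in the Allocated resources table above are computed against the node's allocatable capacity (cpu: 2, i.e. 2000m; memory: 8022308Ki), e.g.:

	\frac{1050\,\text{m}}{2000\,\text{m}} = 52.5\% \approx 52\%
	\qquad
	\frac{638 \times 1024\,\text{Ki}}{8022308\,\text{Ki}} = \frac{653312}{8022308} \approx 8\%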
	
	
	==> dmesg <==
	[Oct19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014509] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499579] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033288] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.729802] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.182201] kauditd_printk_skb: 36 callbacks suppressed
	[Oct19 16:21] overlayfs: idmapped layers are currently not supported
	[  +0.059278] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b] <==
	{"level":"warn","ts":"2025-10-19T16:21:37.603196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.618925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.633072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.655752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.672330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.695370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.710930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.727233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.746953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.758348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.781743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.790742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.811273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.826434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.846670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.875804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.900943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:37.947025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:38.051505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:54.399150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:21:54.422155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.416869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.431294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.463495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:22:16.477951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43096","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a1ca6dedcb00c2720a53738d333bfb129b6f337bce0236fe23a96228cb907986] <==
	2025/10/19 16:23:32 GCP Auth Webhook started!
	2025/10/19 16:23:52 Ready to marshal response ...
	2025/10/19 16:23:52 Ready to write response ...
	2025/10/19 16:23:52 Ready to marshal response ...
	2025/10/19 16:23:52 Ready to write response ...
	2025/10/19 16:23:52 Ready to marshal response ...
	2025/10/19 16:23:52 Ready to write response ...
	2025/10/19 16:24:12 Ready to marshal response ...
	2025/10/19 16:24:12 Ready to write response ...
	2025/10/19 16:24:12 Ready to marshal response ...
	2025/10/19 16:24:12 Ready to write response ...
	2025/10/19 16:24:12 Ready to marshal response ...
	2025/10/19 16:24:12 Ready to write response ...
	2025/10/19 16:24:20 Ready to marshal response ...
	2025/10/19 16:24:20 Ready to write response ...
	
	
	==> kernel <==
	 16:24:22 up 6 min,  0 user,  load average: 1.76, 1.67, 0.75
	Linux addons-567517 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb] <==
	E1019 16:22:19.898649       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1019 16:22:28.502929       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:28.502966       1 main.go:301] handling current node
	I1019 16:22:38.497930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:38.497973       1 main.go:301] handling current node
	I1019 16:22:48.500501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:48.500537       1 main.go:301] handling current node
	I1019 16:22:58.498314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:22:58.498344       1 main.go:301] handling current node
	I1019 16:23:08.502021       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:08.502052       1 main.go:301] handling current node
	I1019 16:23:18.497361       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:18.497393       1 main.go:301] handling current node
	I1019 16:23:28.499308       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:28.499362       1 main.go:301] handling current node
	I1019 16:23:38.498241       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:38.498404       1 main.go:301] handling current node
	I1019 16:23:48.497468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:48.497497       1 main.go:301] handling current node
	I1019 16:23:58.498683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:23:58.498721       1 main.go:301] handling current node
	I1019 16:24:08.504482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:08.504514       1 main.go:301] handling current node
	I1019 16:24:18.498213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:24:18.498245       1 main.go:301] handling current node
	
	
	==> kube-apiserver [60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f] <==
	W1019 16:21:54.415018       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1019 16:21:57.240057       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.98.119.53"}
	W1019 16:22:16.416714       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:22:16.431282       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:22:16.463285       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:22:16.477951       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 16:22:28.633521       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.119.53:443: connect: connection refused
	E1019 16:22:28.633632       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.119.53:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:28.634094       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.119.53:443: connect: connection refused
	E1019 16:22:28.634193       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.119.53:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:28.735062       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.119.53:443: connect: connection refused
	E1019 16:22:28.735103       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.119.53:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.151122       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	W1019 16:22:52.152484       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 16:22:52.152649       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 16:22:52.159735       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.160670       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.171680       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	E1019 16:22:52.214378       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.52.6:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.52.6:443: connect: connection refused" logger="UnhandledError"
	I1019 16:22:52.403696       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 16:24:01.060544       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37120: use of closed network connection
	E1019 16:24:01.190491       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37128: use of closed network connection
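
The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" warnings above are the API server admitting objects despite an unreachable webhook backend, which is the behavior of failurePolicy: Ignore while the gcp-auth service is still coming up. A client-go sketch for inspecting which policy a cluster's mutating webhooks carry; since the configuration object's name does not appear in the log, this simply lists them all, and the kubeconfig path is illustrative:

	package main

	import (
		"context"
		"fmt"
		"log"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
			List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, mwc := range list.Items {
			for _, wh := range mwc.Webhooks {
				policy := "Fail" // API default when the field is unset
				if wh.FailurePolicy != nil {
					policy = string(*wh.FailurePolicy)
				}
				// "Ignore" is what produces the "failing open" lines above.
				fmt.Printf("%s/%s: failurePolicy=%s\n", mwc.Name, wh.Name, policy)
			}
		}
	}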
	
	
	==> kube-controller-manager [eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9] <==
	I1019 16:21:46.414674       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 16:21:46.417531       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 16:21:46.422258       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:21:46.422599       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:21:46.432559       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:21:46.441602       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:21:46.447125       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:21:46.447771       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:21:46.448909       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:21:46.448963       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:21:46.449015       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 16:21:46.451981       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:21:46.456009       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:21:46.460229       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E1019 16:21:52.402968       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1019 16:22:16.409282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 16:22:16.409444       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 16:22:16.409483       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 16:22:16.452315       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 16:22:16.456572       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 16:22:16.510873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:22:16.557794       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:22:31.412403       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1019 16:22:46.516305       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 16:22:46.565526       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9] <==
	I1019 16:21:48.291759       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:21:48.388098       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:21:48.490614       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:21:48.490649       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:21:48.490718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:21:48.604805       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:21:48.604859       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:21:48.620607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:21:48.634344       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:21:48.634378       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:21:48.635782       1 config.go:200] "Starting service config controller"
	I1019 16:21:48.635799       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:21:48.635816       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:21:48.635820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:21:48.635838       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:21:48.635842       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:21:48.636464       1 config.go:309] "Starting node config controller"
	I1019 16:21:48.636477       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:21:48.636483       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:21:48.739008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:21:48.739044       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:21:48.739080       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6] <==
	E1019 16:21:39.163634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:21:39.163740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:21:39.163848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:21:39.163945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:21:39.164051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:39.164150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:21:39.164264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:21:39.164362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:39.164457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 16:21:39.164562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 16:21:39.164781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:21:39.164847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:21:39.174864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 16:21:39.993748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 16:21:40.007045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:21:40.018198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:21:40.078145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:21:40.087052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 16:21:40.115587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:21:40.156738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:21:40.191398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 16:21:40.268543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:21:40.332562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:21:40.370162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1019 16:21:43.404517       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 16:24:18 addons-567517 kubelet[1288]: I1019 16:24:18.775742    1288 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1fea0ad2-c837-496a-8b57-5d67c9d3a3fe-gcp-creds\") on node \"addons-567517\" DevicePath \"\""
	Oct 19 16:24:18 addons-567517 kubelet[1288]: I1019 16:24:18.775792    1288 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-khmlk\" (UniqueName: \"kubernetes.io/projected/1fea0ad2-c837-496a-8b57-5d67c9d3a3fe-kube-api-access-khmlk\") on node \"addons-567517\" DevicePath \"\""
	Oct 19 16:24:18 addons-567517 kubelet[1288]: I1019 16:24:18.775806    1288 reconciler_common.go:299] "Volume detached for volume \"pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46\" (UniqueName: \"kubernetes.io/host-path/1fea0ad2-c837-496a-8b57-5d67c9d3a3fe-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46\") on node \"addons-567517\" DevicePath \"\""
	Oct 19 16:24:19 addons-567517 kubelet[1288]: I1019 16:24:19.610345    1288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba7aaf2-4102-475d-8693-56f00e087529" path="/var/lib/kubelet/pods/5ba7aaf2-4102-475d-8693-56f00e087529/volumes"
	Oct 19 16:24:19 addons-567517 kubelet[1288]: I1019 16:24:19.632453    1288 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edffe20e62c597aed36f70f4d780c10759e60067721dadec453f2323954de683"
	Oct 19 16:24:20 addons-567517 kubelet[1288]: I1019 16:24:20.187359    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zj64\" (UniqueName: \"kubernetes.io/projected/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-kube-api-access-9zj64\") pod \"helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") " pod="local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46"
	Oct 19 16:24:20 addons-567517 kubelet[1288]: I1019 16:24:20.187420    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-data\") pod \"helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") " pod="local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46"
	Oct 19 16:24:20 addons-567517 kubelet[1288]: I1019 16:24:20.187465    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-script\") pod \"helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") " pod="local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46"
	Oct 19 16:24:20 addons-567517 kubelet[1288]: I1019 16:24:20.187488    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-gcp-creds\") pod \"helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") " pod="local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46"
	Oct 19 16:24:20 addons-567517 kubelet[1288]: W1019 16:24:20.528904    1288 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30d4c94890b4bf08fcabe78a597ca4d22aeceeeb974374dfd772dbbccb8ed0d2/crio-781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06 WatchSource:0}: Error finding container 781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06: Status 404 returned error can't find the container with id 781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.621254    1288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fea0ad2-c837-496a-8b57-5d67c9d3a3fe" path="/var/lib/kubelet/pods/1fea0ad2-c837-496a-8b57-5d67c9d3a3fe/volumes"
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.708250    1288 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-script\") pod \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") "
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.708335    1288 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zj64\" (UniqueName: \"kubernetes.io/projected/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-kube-api-access-9zj64\") pod \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") "
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.708369    1288 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-gcp-creds\") pod \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") "
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.708419    1288 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-data\") pod \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\" (UID: \"b6b004be-5cb5-4aaa-a20e-a92039c66ebf\") "
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.708582    1288 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-data" (OuterVolumeSpecName: "data") pod "b6b004be-5cb5-4aaa-a20e-a92039c66ebf" (UID: "b6b004be-5cb5-4aaa-a20e-a92039c66ebf"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.708930    1288 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-script" (OuterVolumeSpecName: "script") pod "b6b004be-5cb5-4aaa-a20e-a92039c66ebf" (UID: "b6b004be-5cb5-4aaa-a20e-a92039c66ebf"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.709197    1288 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b6b004be-5cb5-4aaa-a20e-a92039c66ebf" (UID: "b6b004be-5cb5-4aaa-a20e-a92039c66ebf"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.713558    1288 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-kube-api-access-9zj64" (OuterVolumeSpecName: "kube-api-access-9zj64") pod "b6b004be-5cb5-4aaa-a20e-a92039c66ebf" (UID: "b6b004be-5cb5-4aaa-a20e-a92039c66ebf"). InnerVolumeSpecName "kube-api-access-9zj64". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.809626    1288 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-data\") on node \"addons-567517\" DevicePath \"\""
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.809664    1288 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-script\") on node \"addons-567517\" DevicePath \"\""
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.809715    1288 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zj64\" (UniqueName: \"kubernetes.io/projected/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-kube-api-access-9zj64\") on node \"addons-567517\" DevicePath \"\""
	Oct 19 16:24:21 addons-567517 kubelet[1288]: I1019 16:24:21.809728    1288 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b6b004be-5cb5-4aaa-a20e-a92039c66ebf-gcp-creds\") on node \"addons-567517\" DevicePath \"\""
	Oct 19 16:24:22 addons-567517 kubelet[1288]: I1019 16:24:22.660621    1288 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781b5fd3e85b9d8db090c0b3a4c17c3694a18350666d1e8fb8b1afa215693d06"
	Oct 19 16:24:22 addons-567517 kubelet[1288]: E1019 16:24:22.666165    1288 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46\" is forbidden: User \"system:node:addons-567517\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-567517' and this object" podUID="b6b004be-5cb5-4aaa-a20e-a92039c66ebf" pod="local-path-storage/helper-pod-delete-pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46"
	
	
	==> storage-provisioner [48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783] <==
	W1019 16:23:57.791063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:23:59.795128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:23:59.802161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:01.805634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:01.810162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:03.813498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:03.818697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:05.821288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:05.825900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:07.829485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:07.834510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:09.837715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:09.842654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:11.846460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:11.855908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:13.859773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:13.864722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:15.868785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:15.874097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:17.877126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:17.881903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:19.885766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:19.891421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:21.894667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:24:21.902320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
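Note on the dump above: the kube-scheduler "Failed to watch ... is forbidden" errors at 16:21:39 to 16:21:40 are a startup race (the scheduler's informers begin listing before its RBAC grants propagate) and clear once "Caches are synced" is logged at 16:21:43. The storage-provisioner lines are deprecation warnings rather than failures: the provisioner still lists and watches core/v1 Endpoints, which is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. One way to confirm the replacement resource on this cluster, shown for illustration only (this command is not run by the test; the context name is taken from this run):

	kubectl --context addons-567517 get endpointslices.discovery.k8s.io -n kube-system

Neither set of messages explains the failure recorded below; the failing step is the addons disable call.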
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-567517 -n addons-567517
helpers_test.go:269: (dbg) Run:  kubectl --context addons-567517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w registry-creds-764b6fb674-ngnr2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-567517 describe pod ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w registry-creds-764b6fb674-ngnr2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-567517 describe pod ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w registry-creds-764b6fb674-ngnr2: exit status 1 (118.234567ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qdcxz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-g5z8w" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ngnr2" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-567517 describe pod ingress-nginx-admission-create-qdcxz ingress-nginx-admission-patch-g5z8w registry-creds-764b6fb674-ngnr2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable headlamp --alsologtostderr -v=1: exit status 11 (289.713223ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:23.833083   12297 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:23.833266   12297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:23.833279   12297 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:23.833285   12297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:23.833530   12297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:23.833816   12297 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:23.834210   12297 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:23.834228   12297 addons.go:607] checking whether the cluster is paused
	I1019 16:24:23.834336   12297 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:23.834351   12297 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:23.834877   12297 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:23.868215   12297 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:23.868284   12297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:23.896498   12297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:24.003411   12297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:24.003588   12297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:24.035596   12297 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:24.035622   12297 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:24.035628   12297 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:24.035632   12297 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:24.035640   12297 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:24.035644   12297 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:24.035656   12297 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:24.035660   12297 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:24.035681   12297 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:24.035694   12297 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:24.035708   12297 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:24.035719   12297 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:24.035744   12297 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:24.035754   12297 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:24.035758   12297 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:24.035763   12297 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:24.035788   12297 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:24.035795   12297 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:24.035799   12297 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:24.035824   12297 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:24.035836   12297 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:24.035845   12297 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:24.035849   12297 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:24.035855   12297 cri.go:89] found id: ""
	I1019 16:24:24.035932   12297 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:24.052222   12297 out.go:203] 
	W1019 16:24:24.055236   12297 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:24.055266   12297 out.go:285] * 
	* 
	W1019 16:24:24.059112   12297 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:24.062062   12297 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.58s)
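Analysis: every failed "addons disable" in this report shares one signature. Before disabling an addon, minikube checks whether the cluster is paused (addons.go:607 above): it lists kube-system containers with crictl and then runs "sudo runc list -f json" over SSH. On this crio runner the runc state directory /run/runc does not exist, so the runc call exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED (exit status 11) even though the crictl listing had already succeeded. A sketch of reproducing the probe by hand, using only commands that appear in the trace above (profile name taken from this run):

	out/minikube-linux-arm64 -p addons-567517 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p addons-567517 ssh "sudo runc list -f json"

The first command should print the container IDs seen in the log; the second should fail with "open /run/runc: no such file or directory". The same exit fails CloudSpanner, LocalPath, NvidiaDevicePlugin, and Yakd below; in each case the addon itself reached its healthy state first.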

TestAddons/parallel/CloudSpanner (5.37s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-wks95" [ae3ba144-35ae-476a-a9ec-7c9a5fefd96a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003815434s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (357.747587ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:23.250699   12176 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:23.250980   12176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:23.250990   12176 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:23.251001   12176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:23.251312   12176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:23.251634   12176 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:23.252034   12176 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:23.252054   12176 addons.go:607] checking whether the cluster is paused
	I1019 16:24:23.252187   12176 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:23.252203   12176 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:23.255089   12176 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:23.284978   12176 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:23.285039   12176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:23.306515   12176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:23.421606   12176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:23.421713   12176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:23.472831   12176 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:23.472849   12176 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:23.472854   12176 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:23.472858   12176 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:23.472862   12176 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:23.472865   12176 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:23.472869   12176 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:23.472872   12176 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:23.472875   12176 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:23.472882   12176 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:23.472885   12176 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:23.472888   12176 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:23.472891   12176 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:23.472895   12176 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:23.472897   12176 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:23.472903   12176 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:23.472906   12176 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:23.472910   12176 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:23.472913   12176 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:23.472916   12176 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:23.472925   12176 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:23.472928   12176 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:23.472931   12176 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:23.472934   12176 cri.go:89] found id: ""
	I1019 16:24:23.472984   12176 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:23.497368   12176 out.go:203] 
	W1019 16:24:23.500293   12176 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:23.500332   12176 out.go:285] * 
	* 
	W1019 16:24:23.505820   12176 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:23.508842   12176 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.37s)

TestAddons/parallel/LocalPath (8.45s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-567517 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-567517 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-567517 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1fea0ad2-c837-496a-8b57-5d67c9d3a3fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1fea0ad2-c837-496a-8b57-5d67c9d3a3fe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1fea0ad2-c837-496a-8b57-5d67c9d3a3fe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003920277s
addons_test.go:967: (dbg) Run:  kubectl --context addons-567517 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 ssh "cat /opt/local-path-provisioner/pvc-234e9220-ca42-4ab4-a29e-e83434dd6a46_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-567517 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-567517 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (259.974125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:20.268951   11678 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:20.269195   11678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:20.269224   11678 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:20.269243   11678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:20.269519   11678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:20.269847   11678 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:20.270267   11678 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:20.270305   11678 addons.go:607] checking whether the cluster is paused
	I1019 16:24:20.270449   11678 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:20.270480   11678 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:20.270999   11678 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:20.297629   11678 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:20.297786   11678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:20.315068   11678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:20.417039   11678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:20.417124   11678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:20.448709   11678 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:20.448736   11678 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:20.448741   11678 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:20.448745   11678 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:20.448748   11678 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:20.448752   11678 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:20.448760   11678 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:20.448764   11678 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:20.448768   11678 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:20.448775   11678 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:20.448778   11678 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:20.448782   11678 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:20.448786   11678 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:20.448795   11678 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:20.448799   11678 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:20.448809   11678 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:20.448819   11678 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:20.448827   11678 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:20.448831   11678 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:20.448834   11678 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:20.448843   11678 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:20.448846   11678 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:20.448849   11678 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:20.448853   11678 cri.go:89] found id: ""
	I1019 16:24:20.448906   11678 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:20.465019   11678 out.go:203] 
	W1019 16:24:20.467949   11678 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:20.467978   11678 out.go:285] * 
	* 
	W1019 16:24:20.471901   11678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:20.474868   11678 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.45s)
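Note: the functional phase of LocalPath passed end to end (the PVC bound, the test pod ran to Succeeded, and file1 was read back from /opt/local-path-provisioner); only the shared addons disable step failed. The kubelet message "Failed to get status for pod ... no relationship found between node 'addons-567517' and this object" in the earlier dump lines up with the helper-pod-delete teardown and is most likely a benign race: the helper pod had already been deleted while the kubelet was still posting status. One way to inspect that teardown if reproducing (namespace taken from the log; this command is not part of the test):

	kubectl --context addons-567517 get events -n local-path-storage --sort-by=.lastTimestamp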

TestAddons/parallel/NvidiaDevicePlugin (5.28s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-s8mrl" [655707ac-d6c0-496e-a8c4-732f650cac79] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004123971s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (278.739744ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:11.808568   11219 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:11.808744   11219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:11.808759   11219 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:11.808765   11219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:11.809030   11219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:11.809319   11219 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:11.810152   11219 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:11.810178   11219 addons.go:607] checking whether the cluster is paused
	I1019 16:24:11.810299   11219 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:11.810314   11219 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:11.810804   11219 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:11.834575   11219 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:11.834641   11219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:11.856365   11219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:11.960874   11219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:11.960961   11219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:11.989200   11219 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:11.989219   11219 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:11.989228   11219 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:11.989232   11219 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:11.989236   11219 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:11.989239   11219 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:11.989243   11219 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:11.989246   11219 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:11.989249   11219 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:11.989255   11219 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:11.989258   11219 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:11.989261   11219 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:11.989264   11219 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:11.989267   11219 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:11.989270   11219 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:11.989275   11219 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:11.989278   11219 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:11.989282   11219 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:11.989285   11219 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:11.989288   11219 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:11.989292   11219 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:11.989295   11219 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:11.989298   11219 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:11.989301   11219 cri.go:89] found id: ""
	I1019 16:24:11.989358   11219 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:12.012162   11219 out.go:203] 
	W1019 16:24:12.016333   11219 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:12.016363   11219 out.go:285] * 
	* 
	W1019 16:24:12.020232   11219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:12.024248   11219 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.28s)

TestAddons/parallel/Yakd (5.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9cg5f" [1bf2f400-3d19-4a95-b950-ac972f8c406b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004063919s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-567517 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-567517 addons disable yakd --alsologtostderr -v=1: exit status 11 (281.893532ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 16:24:06.525077   11158 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:24:06.525236   11158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:06.525247   11158 out.go:374] Setting ErrFile to fd 2...
	I1019 16:24:06.525253   11158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:24:06.525517   11158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:24:06.525972   11158 mustload.go:66] Loading cluster: addons-567517
	I1019 16:24:06.526327   11158 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:06.526345   11158 addons.go:607] checking whether the cluster is paused
	I1019 16:24:06.526974   11158 config.go:182] Loaded profile config "addons-567517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:24:06.526995   11158 host.go:66] Checking if "addons-567517" exists ...
	I1019 16:24:06.527523   11158 cli_runner.go:164] Run: docker container inspect addons-567517 --format={{.State.Status}}
	I1019 16:24:06.544925   11158 ssh_runner.go:195] Run: systemctl --version
	I1019 16:24:06.544986   11158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-567517
	I1019 16:24:06.562048   11158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/addons-567517/id_rsa Username:docker}
	I1019 16:24:06.673587   11158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:24:06.673687   11158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:24:06.707205   11158 cri.go:89] found id: "12ea8dcf61f963d1ec2f18e269567ea3897589410601e7e76e658cab586e0dc1"
	I1019 16:24:06.707232   11158 cri.go:89] found id: "b3e64e8c305d363f0deaecb775b5b70515282d978b8f8b93902f737db853a120"
	I1019 16:24:06.707237   11158 cri.go:89] found id: "4303ea4e21d414763ec90861e83549689d375dbbe4a96ebba76dfd48ea1655d7"
	I1019 16:24:06.707241   11158 cri.go:89] found id: "82a85755a9b57fe570a5b20cff6b6f1fb98715a492a098c88b85c59576b4859d"
	I1019 16:24:06.707244   11158 cri.go:89] found id: "bbc0d449ae5d2ecc4301ed3f4f20963e74d7c35eee027e6fd5efc1925826dbea"
	I1019 16:24:06.707247   11158 cri.go:89] found id: "43da60e53772051a90bec332bb59d5aeb3672eb8f1e45dae331fa31ef8090de8"
	I1019 16:24:06.707251   11158 cri.go:89] found id: "1509a0b94cd4f836854e2fab6c35e53df658391426bb6c1e0647398276b5a67b"
	I1019 16:24:06.707255   11158 cri.go:89] found id: "d10be64e7256847c76cb85d9460d052ae3bb7bee7fc04a426e62bc3decf34e65"
	I1019 16:24:06.707259   11158 cri.go:89] found id: "eafe11c1243da451ebdb745572e5d5c58912bc402c5956383ec4b27d00399f9c"
	I1019 16:24:06.707265   11158 cri.go:89] found id: "305f495ac25ce0a4b16c7bc40e4cff29ab0f7cf1bff4c0dca0d365b332efc8e4"
	I1019 16:24:06.707269   11158 cri.go:89] found id: "40e54317c12f225aac20ca1be4f671470b4080c656e8a6db46e4ebb954526cec"
	I1019 16:24:06.707273   11158 cri.go:89] found id: "cd9dd5ae64c43fadae6daa60a7124ef15501e61a81656348f137a472bdadd2cb"
	I1019 16:24:06.707276   11158 cri.go:89] found id: "3e9d456c959c99d65f5195bcc9d0b85556b3359f9a28c957497c47a09c49ea65"
	I1019 16:24:06.707281   11158 cri.go:89] found id: "1871e774871464395b90f67357f38d13aa620f5844b569bccbea10c56a3194b8"
	I1019 16:24:06.707285   11158 cri.go:89] found id: "530194304d419c01dde7d88054be673774a4909d70847c35e369cbebc78e6b51"
	I1019 16:24:06.707290   11158 cri.go:89] found id: "42990e86d93f7a29f4de980716d409212c04ca8009bab7510fd054a57a827287"
	I1019 16:24:06.707297   11158 cri.go:89] found id: "48cf170685f6095f77d524f17ec79e2d9c95f2351a14761ee278fcccd026c783"
	I1019 16:24:06.707301   11158 cri.go:89] found id: "6e17fa2c1568b00adeb7a90142371c0451dccb9dbaa01e466c14cfe0f564e9cb"
	I1019 16:24:06.707304   11158 cri.go:89] found id: "d771336608d23cb80c921cf526b4c6bc18f6b1544cb6aeb3ac2ec63ee8d541f9"
	I1019 16:24:06.707307   11158 cri.go:89] found id: "16eba4f0809b0e85d9e4ea2a97f3c6cba2d16dd2e65dcd544acc758e53c827a6"
	I1019 16:24:06.707312   11158 cri.go:89] found id: "b0cb46d4903581043f0e99ec10bcaae299b5aec7a942f6f30debe5c2a4fe205b"
	I1019 16:24:06.707315   11158 cri.go:89] found id: "60b936e140fc23537883db8eb743ef95e9ba525bba465a475e9165d289f29a5f"
	I1019 16:24:06.707318   11158 cri.go:89] found id: "eecd76037af86e2cdbacaf2f544a17a7e03e2949c22f34afd5b0b7f5829f36f9"
	I1019 16:24:06.707324   11158 cri.go:89] found id: ""
	I1019 16:24:06.707377   11158 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 16:24:06.725759   11158 out.go:203] 
	W1019 16:24:06.732063   11158 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:24:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 16:24:06.732098   11158 out.go:285] * 
	* 
	W1019 16:24:06.735905   11158 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 16:24:06.740322   11158 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-567517 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.29s)
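Note: every MK_ADDON_DISABLE_PAUSED failure in this run traces back to the same check. Before disabling an addon, minikube verifies the cluster is not paused by listing containers through runc, and on this CRI-O node runc's default state directory /run/runc does not exist, so the check errors out. A minimal shell sketch to reproduce the check by hand (assumes a node shell via `minikube ssh -p addons-567517`; the first two commands are the ones shown in the log above):

	# Step 1 of the paused check: list kube-system containers via crictl (succeeds in the log)
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Step 2: the call that fails, because runc's default state dir is absent
	sudo runc list -f json
	# time="..." level=error msg="open /run/runc: no such file or directory"
	# Confirm the state directory runc expects is simply missing on this node
	ls -ld /run/runc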
x
+
TestFunctional/parallel/DashboardCmd (302.61s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-328874 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-328874 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-328874 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-328874 --alsologtostderr -v=1] stderr:
I1019 16:41:11.509830   30605 out.go:360] Setting OutFile to fd 1 ...
I1019 16:41:11.510056   30605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:11.510068   30605 out.go:374] Setting ErrFile to fd 2...
I1019 16:41:11.510074   30605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:11.510320   30605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
I1019 16:41:11.510693   30605 mustload.go:66] Loading cluster: functional-328874
I1019 16:41:11.511098   30605 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:11.511559   30605 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
I1019 16:41:11.537523   30605 host.go:66] Checking if "functional-328874" exists ...
I1019 16:41:11.537831   30605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1019 16:41:11.628645   30605 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 16:41:11.618208363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1019 16:41:11.628851   30605 api_server.go:166] Checking apiserver status ...
I1019 16:41:11.628916   30605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1019 16:41:11.628958   30605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
I1019 16:41:11.656757   30605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
I1019 16:41:11.772700   30605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3983/cgroup
I1019 16:41:11.789776   30605 api_server.go:182] apiserver freezer: "12:freezer:/docker/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/crio/crio-2f25a2af6a9cc31bf15b450f4695b6ac691c23c31fd1be113cad3f103d1ec715"
I1019 16:41:11.789849   30605 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/crio/crio-2f25a2af6a9cc31bf15b450f4695b6ac691c23c31fd1be113cad3f103d1ec715/freezer.state
I1019 16:41:11.805297   30605 api_server.go:204] freezer state: "THAWED"
I1019 16:41:11.805323   30605 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1019 16:41:11.815690   30605 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1019 16:41:11.815724   30605 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1019 16:41:11.815926   30605 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:11.815934   30605 addons.go:70] Setting dashboard=true in profile "functional-328874"
I1019 16:41:11.815941   30605 addons.go:239] Setting addon dashboard=true in "functional-328874"
I1019 16:41:11.815966   30605 host.go:66] Checking if "functional-328874" exists ...
I1019 16:41:11.816386   30605 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
I1019 16:41:11.840439   30605 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1019 16:41:11.845254   30605 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1019 16:41:11.847961   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1019 16:41:11.847983   30605 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1019 16:41:11.848085   30605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
I1019 16:41:11.868140   30605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
I1019 16:41:11.991210   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1019 16:41:11.991232   30605 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1019 16:41:12.020258   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1019 16:41:12.020279   30605 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1019 16:41:12.036720   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1019 16:41:12.036744   30605 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1019 16:41:12.060975   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1019 16:41:12.060994   30605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1019 16:41:12.075603   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1019 16:41:12.075623   30605 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1019 16:41:12.088758   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1019 16:41:12.088778   30605 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1019 16:41:12.102333   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1019 16:41:12.102353   30605 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1019 16:41:12.115996   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1019 16:41:12.116016   30605 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1019 16:41:12.131753   30605 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:41:12.131793   30605 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1019 16:41:12.145092   30605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:41:13.171699   30605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.026566798s)
I1019 16:41:13.174819   30605 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-328874 addons enable metrics-server

I1019 16:41:13.177716   30605 addons.go:202] Writing out "functional-328874" config to set dashboard=true...
W1019 16:41:13.177989   30605 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1019 16:41:13.178639   30605 kapi.go:59] client config for functional-328874: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.key", CAFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21202b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1019 16:41:13.179194   30605 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1019 16:41:13.179214   30605 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1019 16:41:13.179226   30605 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1019 16:41:13.179234   30605 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1019 16:41:13.179239   30605 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1019 16:41:13.221990   30605 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  277b0569-78ce-4f09-bb61-a38797a02e50 1448 0 2025-10-19 16:41:13 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-19 16:41:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.154.40,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.109.154.40],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1019 16:41:13.222147   30605 out.go:285] * Launching proxy ...
* Launching proxy ...
I1019 16:41:13.222219   30605 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-328874 proxy --port 36195]
I1019 16:41:13.222510   30605 dashboard.go:159] Waiting for kubectl to output host:port ...
I1019 16:41:13.324735   30605 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1019 16:41:13.324786   30605 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1019 16:41:13.343414   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d19c43f-9bb7-4df7-a379-ea2a5f0b61d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40016057c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b28c0 TLS:<nil>}
I1019 16:41:13.343488   30605 retry.go:31] will retry after 108.755µs: Temporary Error: unexpected response code: 503
I1019 16:41:13.347951   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7dc7413-6070-4387-83a6-49e063217784] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001605840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2c80 TLS:<nil>}
I1019 16:41:13.348033   30605 retry.go:31] will retry after 93.024µs: Temporary Error: unexpected response code: 503
I1019 16:41:13.357769   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[76a94aaa-24cd-476f-8ff7-d4d6557f4f98] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40016058c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2dc0 TLS:<nil>}
I1019 16:41:13.357832   30605 retry.go:31] will retry after 242.081µs: Temporary Error: unexpected response code: 503
I1019 16:41:13.362131   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f13a925c-7fe9-4390-b7a8-069984958c41] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001605940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2f00 TLS:<nil>}
I1019 16:41:13.362191   30605 retry.go:31] will retry after 329.844µs: Temporary Error: unexpected response code: 503
I1019 16:41:13.366246   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e5fee41-da49-4502-b623-3992a7614615] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000486dc0 TLS:<nil>}
I1019 16:41:13.366303   30605 retry.go:31] will retry after 675.153µs: Temporary Error: unexpected response code: 503
I1019 16:41:13.369970   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2bd54dad-d779-4227-a4f9-636472c10061] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000486f00 TLS:<nil>}
I1019 16:41:13.370030   30605 retry.go:31] will retry after 730.077µs: Temporary Error: unexpected response code: 503
I1019 16:41:13.376051   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aff09a12-4626-4785-afee-385464545a08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487040 TLS:<nil>}
I1019 16:41:13.376113   30605 retry.go:31] will retry after 1.125081ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.384900   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6b76b33a-1119-47bc-ad00-b6934ab76aee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001605b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3040 TLS:<nil>}
I1019 16:41:13.384964   30605 retry.go:31] will retry after 1.422633ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.390094   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5908ba4a-c7f7-4606-8455-93cf817fbcde] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001605c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3180 TLS:<nil>}
I1019 16:41:13.390155   30605 retry.go:31] will retry after 1.449812ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.396931   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[241713f8-0c7e-4a25-98da-dbbe335e16ae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001605c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b32c0 TLS:<nil>}
I1019 16:41:13.397005   30605 retry.go:31] will retry after 5.237317ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.406280   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4e4d922d-e0b0-45a1-9346-e6c4c0495f1b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001605d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3400 TLS:<nil>}
I1019 16:41:13.406355   30605 retry.go:31] will retry after 3.668304ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.413527   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79b6b573-d876-4a8c-8e30-a27a7be33bc2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487180 TLS:<nil>}
I1019 16:41:13.413587   30605 retry.go:31] will retry after 7.692613ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.425592   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[949f4c70-cb88-4712-8791-28123fcd1fce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004872c0 TLS:<nil>}
I1019 16:41:13.425653   30605 retry.go:31] will retry after 8.528413ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.437919   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a61a1d6-82d6-490d-80d3-3e366435a51f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487400 TLS:<nil>}
I1019 16:41:13.437996   30605 retry.go:31] will retry after 25.924634ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.468543   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fc27bc8e-d033-4929-bb32-2efddbc85045] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487540 TLS:<nil>}
I1019 16:41:13.468657   30605 retry.go:31] will retry after 37.457458ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.510146   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3690e423-db7a-41b6-9cb4-086a7d554249] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001632000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3540 TLS:<nil>}
I1019 16:41:13.510221   30605 retry.go:31] will retry after 43.908836ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.558101   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[07a30725-3212-4545-ab18-a29c498ee5fa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487900 TLS:<nil>}
I1019 16:41:13.558161   30605 retry.go:31] will retry after 44.690955ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.606358   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b07a1ff8-cb99-4713-975b-1e5cb71bb2d1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001632100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3680 TLS:<nil>}
I1019 16:41:13.606488   30605 retry.go:31] will retry after 117.852706ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.728010   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1e7a4c4d-a2c4-4faa-9dd8-9c39703937a9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x40014f4c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487a40 TLS:<nil>}
I1019 16:41:13.728103   30605 retry.go:31] will retry after 143.774441ms: Temporary Error: unexpected response code: 503
I1019 16:41:13.875908   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6805b330-7902-4d22-b522-f2e0669700b4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:13 GMT]] Body:0x4001632200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3900 TLS:<nil>}
I1019 16:41:13.875985   30605 retry.go:31] will retry after 270.557694ms: Temporary Error: unexpected response code: 503
I1019 16:41:14.151007   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba3e2121-3551-4a40-bcd3-cd88b7befd91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:14 GMT]] Body:0x40014f4d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487b80 TLS:<nil>}
I1019 16:41:14.151179   30605 retry.go:31] will retry after 378.985397ms: Temporary Error: unexpected response code: 503
I1019 16:41:14.533863   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d0cfe5e4-28ee-4ef7-93c3-c56a80637e62] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:14 GMT]] Body:0x4001632300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3b80 TLS:<nil>}
I1019 16:41:14.533934   30605 retry.go:31] will retry after 408.773686ms: Temporary Error: unexpected response code: 503
I1019 16:41:14.946369   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2c7fdea-9fa1-48dd-a704-7572303d737c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:14 GMT]] Body:0x40016323c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000487cc0 TLS:<nil>}
I1019 16:41:14.946431   30605 retry.go:31] will retry after 552.851047ms: Temporary Error: unexpected response code: 503
I1019 16:41:15.502613   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6a9ab264-1309-465d-9ae0-cbd0e3697dec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:15 GMT]] Body:0x40014f4ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c000 TLS:<nil>}
I1019 16:41:15.502675   30605 retry.go:31] will retry after 944.841593ms: Temporary Error: unexpected response code: 503
I1019 16:41:16.451170   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2412fd8-cdcc-42d3-b80f-7c8a02315d54] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:16 GMT]] Body:0x40014f4f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c140 TLS:<nil>}
I1019 16:41:16.451235   30605 retry.go:31] will retry after 1.752155228s: Temporary Error: unexpected response code: 503
I1019 16:41:18.207348   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb937beb-0608-42f1-ba9c-80dc30596248] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:18 GMT]] Body:0x4001632540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b3cc0 TLS:<nil>}
I1019 16:41:18.207400   30605 retry.go:31] will retry after 2.40020411s: Temporary Error: unexpected response code: 503
I1019 16:41:20.612491   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[21724ac9-02f4-4d0e-bf4a-70f33ef6f1d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:20 GMT]] Body:0x40016325c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0000 TLS:<nil>}
I1019 16:41:20.612550   30605 retry.go:31] will retry after 3.307715493s: Temporary Error: unexpected response code: 503
I1019 16:41:23.925226   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4f75a10-499f-4a67-a6c7-bea2ac173498] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:23 GMT]] Body:0x40014f50c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c280 TLS:<nil>}
I1019 16:41:23.925310   30605 retry.go:31] will retry after 3.387787722s: Temporary Error: unexpected response code: 503
I1019 16:41:27.316380   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f714babb-bcf6-4bf7-8b7f-1729d8af4582] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:27 GMT]] Body:0x40014f5140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c3c0 TLS:<nil>}
I1019 16:41:27.316458   30605 retry.go:31] will retry after 5.52672143s: Temporary Error: unexpected response code: 503
I1019 16:41:32.846293   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[10627b70-06da-4876-b291-adafedb7ee42] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:32 GMT]] Body:0x4001632780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c500 TLS:<nil>}
I1019 16:41:32.846352   30605 retry.go:31] will retry after 12.98869829s: Temporary Error: unexpected response code: 503
I1019 16:41:45.840743   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f98073e-1534-4536-974d-c9ebdc2f7296] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:45 GMT]] Body:0x4001632800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0140 TLS:<nil>}
I1019 16:41:45.840816   30605 retry.go:31] will retry after 12.272113589s: Temporary Error: unexpected response code: 503
I1019 16:41:58.116369   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20521f60-a636-4ef0-a43e-56de2a056b08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:41:58 GMT]] Body:0x40016328c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c640 TLS:<nil>}
I1019 16:41:58.116439   30605 retry.go:31] will retry after 27.468200584s: Temporary Error: unexpected response code: 503
I1019 16:42:25.589591   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28518770-c38a-4d96-b95a-30717ef1274e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:42:25 GMT]] Body:0x4001632940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0280 TLS:<nil>}
I1019 16:42:25.589660   30605 retry.go:31] will retry after 28.967551186s: Temporary Error: unexpected response code: 503
I1019 16:42:54.560582   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[18a5fb2d-9b17-4c6b-b61b-810e8a265834] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:42:54 GMT]] Body:0x40014f5340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c8c0 TLS:<nil>}
I1019 16:42:54.560646   30605 retry.go:31] will retry after 48.044155097s: Temporary Error: unexpected response code: 503
I1019 16:43:42.607850   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e09e9ff-3798-4ce7-ba37-58c1d34cecfc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:43:42 GMT]] Body:0x40014f4080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d03c0 TLS:<nil>}
I1019 16:43:42.607916   30605 retry.go:31] will retry after 30.228163711s: Temporary Error: unexpected response code: 503
I1019 16:44:12.840418   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d0e2051d-6cac-436f-820d-be732296aa5d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:44:12 GMT]] Body:0x4001632100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027ca00 TLS:<nil>}
I1019 16:44:12.840484   30605 retry.go:31] will retry after 1m26.063349652s: Temporary Error: unexpected response code: 503
I1019 16:45:38.908650   30605 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a44ab538-8b55-492b-871d-69e01de17633] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:45:38 GMT]] Body:0x40014f4080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0500 TLS:<nil>}
I1019 16:45:38.908718   30605 retry.go:31] will retry after 55.065863287s: Temporary Error: unexpected response code: 503
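Note: the retry loop above is minikube polling the dashboard service through the kubectl proxy it launched on port 36195 and backing off after each 503, until the test's five-minute budget expires without the pod ever serving; that is why "output didn't produce a URL" fired. A rough shell equivalent of that health poll (the proxy command and URL are taken verbatim from the log; the loop itself is a sketch, not minikube's implementation):

	kubectl --context functional-328874 proxy --port 36195 &
	URL='http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/'
	# Keep probing until the service proxy stops answering 503 Service Unavailable
	until [ "$(curl -s -o /dev/null -w '%{http_code}' "$URL")" = "200" ]; do
	  echo "dashboard not ready yet; retrying..."
	  sleep 5
	done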
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-328874
helpers_test.go:243: (dbg) docker inspect functional-328874:

-- stdout --
	[
	    {
	        "Id": "53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d",
	        "Created": "2025-10-19T16:28:13.729951307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19819,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:28:13.791812897Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/hosts",
	        "LogPath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d-json.log",
	        "Name": "/functional-328874",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-328874:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-328874",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d",
	                "LowerDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-328874",
	                "Source": "/var/lib/docker/volumes/functional-328874/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-328874",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-328874",
	                "name.minikube.sigs.k8s.io": "functional-328874",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63c2e7e4d28408d781b7724852b5b01955a1749d2feb6c063a7d2b19f26b2331",
	            "SandboxKey": "/var/run/docker/netns/63c2e7e4d284",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-328874": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:4a:0e:80:5a:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "600ec8e9409c4b1d1b152089c6647bbccf98cddbf5d30c37188777771b635dc6",
	                    "EndpointID": "783860c0a458e4f043008b0606242fa9fe42a3513ad595527519d5052efb0a65",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-328874",
	                        "53040687d3af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
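The inspect dump above can be reduced to the few fields these tests actually depend on. A minimal sketch using docker's template flag, assuming the kic container is still named functional-328874 (it is, per Config.Hostname above):

    # published host ports (22, 2376, 5000, 8441 and 32443 map to 32778-32782 above)
    docker inspect --format '{{json .NetworkSettings.Ports}}' functional-328874
    # container IP on the profile network (192.168.49.2 above)
    docker inspect --format '{{.NetworkSettings.Networks.functional-328874.IPAddress}}' functional-328874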
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-328874 -n functional-328874
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 logs -n 25: (1.487451933s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-328874 image ls                                                                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image load --daemon kicbase/echo-server:functional-328874 --alsologtostderr                                                             │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image ls                                                                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image save kicbase/echo-server:functional-328874 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image rm kicbase/echo-server:functional-328874 --alsologtostderr                                                                        │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image ls                                                                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image save --daemon kicbase/echo-server:functional-328874 --alsologtostderr                                                             │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh sudo cat /etc/test/nested/copy/4111/hosts                                                                                           │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh sudo cat /etc/ssl/certs/4111.pem                                                                                                    │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh sudo cat /usr/share/ca-certificates/4111.pem                                                                                        │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh sudo cat /etc/ssl/certs/41112.pem                                                                                                   │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh sudo cat /usr/share/ca-certificates/41112.pem                                                                                       │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image ls --format short --alsologtostderr                                                                                               │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image ls --format yaml --alsologtostderr                                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ ssh            │ functional-328874 ssh pgrep buildkitd                                                                                                                     │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │                     │
	│ image          │ functional-328874 image build -t localhost/my-image:functional-328874 testdata/build --alsologtostderr                                                    │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image ls                                                                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image ls --format json --alsologtostderr                                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ image          │ functional-328874 image ls --format table --alsologtostderr                                                                                               │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ update-context │ functional-328874 update-context --alsologtostderr -v=2                                                                                                   │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ update-context │ functional-328874 update-context --alsologtostderr -v=2                                                                                                   │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	│ update-context │ functional-328874 update-context --alsologtostderr -v=2                                                                                                   │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:41 UTC │ 19 Oct 25 16:41 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
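	The image rows in the audit log above form a save/rm/load round-trip that can be replayed by hand; a sketch using the same subcommands, with /tmp/echo-server.tar standing in for the workspace tarball path:
	
	  out/minikube-linux-arm64 -p functional-328874 image save kicbase/echo-server:functional-328874 /tmp/echo-server.tar   # export from the runtime
	  out/minikube-linux-arm64 -p functional-328874 image rm kicbase/echo-server:functional-328874                         # drop the cached copy
	  out/minikube-linux-arm64 -p functional-328874 image load /tmp/echo-server.tar                                        # reimport the tarball
	  out/minikube-linux-arm64 -p functional-328874 image ls                                                               # verify it is back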
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:41:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:41:11.119603   30504 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:41:11.119853   30504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:11.119894   30504 out.go:374] Setting ErrFile to fd 2...
	I1019 16:41:11.119916   30504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:11.120359   30504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:41:11.120970   30504 out.go:368] Setting JSON to false
	I1019 16:41:11.122227   30504 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1419,"bootTime":1760890652,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:41:11.122382   30504 start.go:143] virtualization:  
	I1019 16:41:11.126622   30504 out.go:179] * [functional-328874] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 16:41:11.129957   30504 notify.go:221] Checking for updates...
	I1019 16:41:11.130774   30504 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:41:11.134349   30504 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:41:11.137655   30504 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:41:11.140772   30504 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:41:11.143840   30504 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 16:41:11.146771   30504 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:41:11.150095   30504 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:41:11.150917   30504 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:41:11.199803   30504 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:41:11.199907   30504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:41:11.285852   30504 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 16:41:11.27363293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:41:11.285958   30504 docker.go:319] overlay module found
	I1019 16:41:11.289264   30504 out.go:179] * Using the docker driver based on existing profile
	I1019 16:41:11.293016   30504 start.go:309] selected driver: docker
	I1019 16:41:11.293042   30504 start.go:930] validating driver "docker" against &{Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:41:11.293993   30504 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:41:11.294133   30504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:41:11.428667   30504 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 16:41:11.418422753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:41:11.429089   30504 cni.go:84] Creating CNI manager for ""
	I1019 16:41:11.429154   30504 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:41:11.429202   30504 start.go:353] cluster config:
	{Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:41:11.432420   30504 out.go:179] * dry-run validation complete!
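	The start above runs in dry-run mode: it validates the driver and the saved cluster config without mutating the host. A sketch of the equivalent manual invocation, with flags inferred from the logged config:
	
	  out/minikube-linux-arm64 start -p functional-328874 --dry-run --driver=docker --container-runtime=crio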
	
	
	==> CRI-O <==
	Oct 19 16:41:18 functional-328874 crio[3570]: time="2025-10-19T16:41:18.808488406Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-328874 found" id=a0b1dba0-9743-4670-9678-147b391e883c name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:18 functional-328874 crio[3570]: time="2025-10-19T16:41:18.832300925Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-328874" id=42e9b5ef-22a5-49a4-87b3-297fe23496a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:18 functional-328874 crio[3570]: time="2025-10-19T16:41:18.832428803Z" level=info msg="Image localhost/kicbase/echo-server:functional-328874 not found" id=42e9b5ef-22a5-49a4-87b3-297fe23496a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:18 functional-328874 crio[3570]: time="2025-10-19T16:41:18.832470888Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-328874 found" id=42e9b5ef-22a5-49a4-87b3-297fe23496a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:19 functional-328874 crio[3570]: time="2025-10-19T16:41:19.704667519Z" level=info msg="Checking image status: kicbase/echo-server:functional-328874" id=4271dba5-d213-4cf1-b26c-b337688ef40c name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:19 functional-328874 crio[3570]: time="2025-10-19T16:41:19.731140695Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-328874" id=5bc4dd02-5723-471a-b7be-f8f4857f2e24 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:19 functional-328874 crio[3570]: time="2025-10-19T16:41:19.731301213Z" level=info msg="Image docker.io/kicbase/echo-server:functional-328874 not found" id=5bc4dd02-5723-471a-b7be-f8f4857f2e24 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:19 functional-328874 crio[3570]: time="2025-10-19T16:41:19.731345267Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-328874 found" id=5bc4dd02-5723-471a-b7be-f8f4857f2e24 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:19 functional-328874 crio[3570]: time="2025-10-19T16:41:19.757484638Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-328874" id=031f4e37-cbe6-4875-bb84-1b681a84e7ac name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:19 functional-328874 crio[3570]: time="2025-10-19T16:41:19.757672866Z" level=info msg="Image localhost/kicbase/echo-server:functional-328874 not found" id=031f4e37-cbe6-4875-bb84-1b681a84e7ac name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:19 functional-328874 crio[3570]: time="2025-10-19T16:41:19.75771358Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-328874 found" id=031f4e37-cbe6-4875-bb84-1b681a84e7ac name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:20 functional-328874 crio[3570]: time="2025-10-19T16:41:20.897802458Z" level=info msg="Checking image status: kicbase/echo-server:functional-328874" id=9bd8787a-0f16-404e-9b5c-ac9007559713 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:20 functional-328874 crio[3570]: time="2025-10-19T16:41:20.922492885Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-328874" id=42bbecf2-cea5-4285-bda1-0749f4534d17 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:20 functional-328874 crio[3570]: time="2025-10-19T16:41:20.922656669Z" level=info msg="Image docker.io/kicbase/echo-server:functional-328874 not found" id=42bbecf2-cea5-4285-bda1-0749f4534d17 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:20 functional-328874 crio[3570]: time="2025-10-19T16:41:20.922697588Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-328874 found" id=42bbecf2-cea5-4285-bda1-0749f4534d17 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:20 functional-328874 crio[3570]: time="2025-10-19T16:41:20.949925094Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-328874" id=181b64e8-af09-4c1d-9011-0ec44c81788b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:20 functional-328874 crio[3570]: time="2025-10-19T16:41:20.950060817Z" level=info msg="Image localhost/kicbase/echo-server:functional-328874 not found" id=181b64e8-af09-4c1d-9011-0ec44c81788b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:20 functional-328874 crio[3570]: time="2025-10-19T16:41:20.950268967Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-328874 found" id=181b64e8-af09-4c1d-9011-0ec44c81788b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:21 functional-328874 crio[3570]: time="2025-10-19T16:41:21.730855169Z" level=info msg="Checking image status: kicbase/echo-server:functional-328874" id=443fe155-0e7c-44f8-bab7-461c00389695 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:21 functional-328874 crio[3570]: time="2025-10-19T16:41:21.756152626Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-328874" id=ea29bb75-0318-49d6-abdb-9b1d89a52112 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:21 functional-328874 crio[3570]: time="2025-10-19T16:41:21.756315991Z" level=info msg="Image docker.io/kicbase/echo-server:functional-328874 not found" id=ea29bb75-0318-49d6-abdb-9b1d89a52112 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:21 functional-328874 crio[3570]: time="2025-10-19T16:41:21.756356517Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-328874 found" id=ea29bb75-0318-49d6-abdb-9b1d89a52112 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:21 functional-328874 crio[3570]: time="2025-10-19T16:41:21.781872463Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-328874" id=e6933c39-cc86-4261-9b51-0af391010f37 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:21 functional-328874 crio[3570]: time="2025-10-19T16:41:21.782019098Z" level=info msg="Image localhost/kicbase/echo-server:functional-328874 not found" id=e6933c39-cc86-4261-9b51-0af391010f37 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 16:41:21 functional-328874 crio[3570]: time="2025-10-19T16:41:21.782056391Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-328874 found" id=e6933c39-cc86-4261-9b51-0af391010f37 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d6595dd9b5f4e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   5 minutes ago       Exited              mount-munger              0                   225f25534225d       busybox-mount                               default
	4fc5468d412ab       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a       15 minutes ago      Running             myfrontend                0                   8e4095d71384a       sp-pod                                      default
	626b8b71bedcf       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0       15 minutes ago      Running             nginx                     0                   429240aef69b7       nginx-svc                                   default
	2a30fa624bf4d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      15 minutes ago      Running             kindnet-cni               2                   c1380cde84642       kindnet-rnknf                               kube-system
	c2db568fd48f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 minutes ago      Running             coredns                   2                   639dc60f92ddc       coredns-66bc5c9577-hxbk8                    kube-system
	c17f1c2b86d1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 minutes ago      Running             storage-provisioner       3                   496cfd542f3db       storage-provisioner                         kube-system
	9b3bfe835e794       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      15 minutes ago      Running             kube-proxy                2                   a5d4684171aca       kube-proxy-7lgrr                            kube-system
	2f25a2af6a9cc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      15 minutes ago      Running             kube-apiserver            0                   c395cadc0aea3       kube-apiserver-functional-328874            kube-system
	42d7f470581bc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      15 minutes ago      Running             kube-controller-manager   2                   6f315785f1a8f       kube-controller-manager-functional-328874   kube-system
	7126285a58c3e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      15 minutes ago      Running             kube-scheduler            2                   27b74a80b5cb4       kube-scheduler-functional-328874            kube-system
	d3902c36a60d7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      15 minutes ago      Running             etcd                      2                   fb6c70b27564a       etcd-functional-328874                      kube-system
	05e3c65bac33f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 minutes ago      Created             storage-provisioner       2                   496cfd542f3db       storage-provisioner                         kube-system
	1ea61893b4a7a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      16 minutes ago      Exited              kindnet-cni               1                   c1380cde84642       kindnet-rnknf                               kube-system
	d6dd353e9326f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      16 minutes ago      Exited              etcd                      1                   fb6c70b27564a       etcd-functional-328874                      kube-system
	54f8743c36bea       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      16 minutes ago      Exited              kube-controller-manager   1                   6f315785f1a8f       kube-controller-manager-functional-328874   kube-system
	5884904d4730f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      16 minutes ago      Exited              kube-scheduler            1                   27b74a80b5cb4       kube-scheduler-functional-328874            kube-system
	ecffc18cc92ec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      16 minutes ago      Exited              coredns                   1                   639dc60f92ddc       coredns-66bc5c9577-hxbk8                    kube-system
	7a11f6e21bfb8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      16 minutes ago      Exited              kube-proxy                1                   a5d4684171aca       kube-proxy-7lgrr                            kube-system
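	This table is CRI's container listing; the same view, including the Created and Exited entries, can be reproduced in the node with (a sketch):
	
	  out/minikube-linux-arm64 -p functional-328874 ssh -- sudo crictl ps -a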
	
	
	==> coredns [c2db568fd48f78c46674f1afb57aa1e8b987b73039cd0ed425d5b96ac2f24234] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42576 - 29638 "HINFO IN 2105635577704700243.8786815664405587714. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023560902s
	
	
	==> coredns [ecffc18cc92ece5f5d22b35bddc4303f0531a00e27fe53c004b077ecc57e7701] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54572 - 50715 "HINFO IN 5367120530847457092.502566715447863678. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01206511s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
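	Both coredns blocks above remain retrievable from the live pods; a sketch, assuming the kubeadm-default label k8s-app=kube-dns:
	
	  kubectl -n kube-system logs -l k8s-app=kube-dns              # the running (restarted) instance
	  kubectl -n kube-system logs -l k8s-app=kube-dns --previous   # the instance that shut down above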
	
	
	==> describe nodes <==
	Name:               functional-328874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-328874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=functional-328874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_28_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:28:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-328874
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:46:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:45:28 +0000   Sun, 19 Oct 2025 16:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:45:28 +0000   Sun, 19 Oct 2025 16:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:45:28 +0000   Sun, 19 Oct 2025 16:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:45:28 +0000   Sun, 19 Oct 2025 16:29:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-328874
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                8811ae4d-9696-4db7-b42c-b5c677cbe300
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bxlwb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-7d85dfc575-wmhbr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-hxbk8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-functional-328874                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-rnknf                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-328874              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-328874     200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-7lgrr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-328874              100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-fc7rz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fcpdg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node functional-328874 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node functional-328874 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node functional-328874 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                node-controller  Node functional-328874 event: Registered Node functional-328874 in Controller
	  Normal   NodeReady                16m                kubelet          Node functional-328874 status is now: NodeReady
	  Normal   RegisteredNode           16m                node-controller  Node functional-328874 event: Registered Node functional-328874 in Controller
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-328874 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-328874 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)  kubelet          Node functional-328874 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node functional-328874 event: Registered Node functional-328874 in Controller
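	The Ready condition this post-mortem keys on can be pulled without the full describe output; a sketch using a JSONPath query:
	
	  kubectl get node functional-328874 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'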
	
	
	==> dmesg <==
	[Oct19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014509] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499579] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033288] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.729802] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.182201] kauditd_printk_skb: 36 callbacks suppressed
	[Oct19 16:21] overlayfs: idmapped layers are currently not supported
	[  +0.059278] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct19 16:27] overlayfs: idmapped layers are currently not supported
	[Oct19 16:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d3902c36a60d7007bf118a856812b41bbd3adb3c5931d84ce26f27391ceb386d] <==
	{"level":"warn","ts":"2025-10-19T16:30:20.772794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.804826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.826334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.849675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.883039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.910489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.945193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.968656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.052918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.085989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.112337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.143300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.176094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.204522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.236658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.260838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.287335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.317017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.377740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:40:19.434664Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1077}
	{"level":"info","ts":"2025-10-19T16:40:19.458671Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1077,"took":"23.602786ms","hash":4239443231,"current-db-size-bytes":3100672,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1314816,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-19T16:40:19.458720Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4239443231,"revision":1077,"compact-revision":-1}
	{"level":"info","ts":"2025-10-19T16:45:19.441602Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1336}
	{"level":"info","ts":"2025-10-19T16:45:19.445325Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1336,"took":"3.38982ms","hash":1597553018,"current-db-size-bytes":3100672,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1949696,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-10-19T16:45:19.445380Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1597553018,"revision":1336,"compact-revision":1077}
	
	
	==> etcd [d6dd353e9326fab74dfd667e341f6f1a5a012c6be5d14e6a42b8c35c9343df48] <==
	{"level":"warn","ts":"2025-10-19T16:29:45.995271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.012926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.038028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.069418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.087178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.100108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.186596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44652","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:29:59.121479Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T16:29:59.121580Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-328874","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T16:29:59.121706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:29:59.266108Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:29:59.266214Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:29:59.266257Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-19T16:29:59.266296Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T16:29:59.266360Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266429Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266469Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:29:59.266477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266599Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266655Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:29:59.266686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:29:59.270281Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T16:29:59.270356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:29:59.270387Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T16:29:59.270394Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-328874","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 16:46:12 up 28 min,  0 user,  load average: 0.14, 0.34, 0.58
	Linux functional-328874 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ea61893b4a7a6c9c6cb14315b9c3c4ef56bd18e14a9d5a609bc309ab6466cd6] <==
	I1019 16:29:44.133047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 16:29:44.133391       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 16:29:44.133568       1 main.go:148] setting mtu 1500 for CNI 
	I1019 16:29:44.133618       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 16:29:44.133656       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:29:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:29:44.315685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:29:44.315768       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:29:44.318986       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:29:44.319876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 16:29:47.421198       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:29:47.421292       1 metrics.go:72] Registering metrics
	I1019 16:29:47.421369       1 controller.go:711] "Syncing nftables rules"
	I1019 16:29:54.315423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:29:54.315488       1 main.go:301] handling current node
	
	
	==> kindnet [2a30fa624bf4dd2f33d842820173a161c4875220f5d27219cbbda9f9e0587543] <==
	I1019 16:44:03.701203       1 main.go:301] handling current node
	I1019 16:44:13.708254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:13.708289       1 main.go:301] handling current node
	I1019 16:44:23.706770       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:23.706810       1 main.go:301] handling current node
	I1019 16:44:33.700046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:33.700455       1 main.go:301] handling current node
	I1019 16:44:43.700116       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:43.700151       1 main.go:301] handling current node
	I1019 16:44:53.701070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:44:53.701178       1 main.go:301] handling current node
	I1019 16:45:03.700139       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:45:03.700253       1 main.go:301] handling current node
	I1019 16:45:13.708004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:45:13.708104       1 main.go:301] handling current node
	I1019 16:45:23.703212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:45:23.703326       1 main.go:301] handling current node
	I1019 16:45:33.701995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:45:33.702036       1 main.go:301] handling current node
	I1019 16:45:43.708299       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:45:43.708333       1 main.go:301] handling current node
	I1019 16:45:53.703021       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:45:53.703061       1 main.go:301] handling current node
	I1019 16:46:03.700034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:46:03.700070       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2f25a2af6a9cc31bf15b450f4695b6ac691c23c31fd1be113cad3f103d1ec715] <==
	I1019 16:30:22.183913       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 16:30:22.184296       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 16:30:22.184313       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 16:30:22.185972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 16:30:22.225863       1 cache.go:39] Caches are synced for autoregister controller
	I1019 16:30:22.233114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 16:30:22.263348       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 16:30:22.971820       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 16:30:23.041786       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 16:30:24.033046       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 16:30:24.231432       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 16:30:24.346306       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 16:30:24.358982       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 16:30:36.937259       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.219.36"}
	I1019 16:30:36.956286       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 16:30:36.956762       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 16:30:43.684956       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.50.63"}
	I1019 16:30:53.210491       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 16:30:53.378974       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.223.151"}
	E1019 16:31:00.523965       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:46988: use of closed network connection
	I1019 16:31:07.811454       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.2.20"}
	I1019 16:40:22.148035       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 16:41:12.713509       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 16:41:13.131048       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.154.40"}
	I1019 16:41:13.164242       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.153.57"}
	
	
	==> kube-controller-manager [42d7f470581bc38cb1d34a75915181d476073df2904b73faaf6f95394f0ce878] <==
	I1019 16:30:25.606877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:30:25.606911       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 16:30:25.606942       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 16:30:25.607044       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 16:30:25.607151       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 16:30:25.607259       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-328874"
	I1019 16:30:25.607327       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 16:30:25.612391       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 16:30:25.612491       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 16:30:25.616005       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 16:30:25.616095       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:30:25.617143       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:30:25.617362       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:30:25.618605       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 16:30:25.618665       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 16:30:25.625427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:30:25.651745       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E1019 16:41:12.864080       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:41:12.902755       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:41:12.920394       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:41:12.921436       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:41:12.944479       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:41:12.945033       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:41:12.961271       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:41:12.961386       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [54f8743c36bea2aa4415c7ed67c42430c15c73d2a01401b61764be1ecd33ed53] <==
	I1019 16:29:50.483150       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 16:29:50.481279       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:29:50.483834       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 16:29:50.486161       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 16:29:50.487722       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:29:50.489925       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:29:50.495116       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:29:50.498677       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:29:50.498701       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 16:29:50.498743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 16:29:50.502676       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 16:29:50.509303       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:29:50.512624       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:29:50.524477       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 16:29:50.524518       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 16:29:50.524700       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 16:29:50.524969       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 16:29:50.525116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:29:50.534315       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:29:50.538482       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 16:29:50.540732       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 16:29:50.543571       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 16:29:50.545342       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:29:50.566688       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:29:50.577130       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [7a11f6e21bfb845abafa9df5da09529c367cb557ccb147697a4b34da4543a390] <==
	I1019 16:29:43.726781       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:29:43.832211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1019 16:29:43.833032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-328874&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 16:29:47.370211       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:29:47.370246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:29:47.370378       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:29:47.494700       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:29:47.494772       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:29:47.573723       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:29:47.582950       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:29:47.582986       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:47.584116       1 config.go:200] "Starting service config controller"
	I1019 16:29:47.584134       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:29:47.585219       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:29:47.585228       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:29:47.585252       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:29:47.585256       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:29:47.585643       1 config.go:309] "Starting node config controller"
	I1019 16:29:47.585661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:29:47.585668       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:29:47.686593       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:29:47.686694       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:29:47.686708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [9b3bfe835e794840778b4b449dee89f256dd16543b8f0625d4f050d27633df26] <==
	I1019 16:30:23.466762       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:30:23.574627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:30:23.675182       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:30:23.675226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:30:23.675298       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:30:23.795638       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:30:23.795702       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:30:23.814710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:30:23.815138       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:30:23.815172       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:30:23.816585       1 config.go:200] "Starting service config controller"
	I1019 16:30:23.816610       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:30:23.823612       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:30:23.823632       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:30:23.823650       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:30:23.823654       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:30:23.824102       1 config.go:309] "Starting node config controller"
	I1019 16:30:23.824110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:30:23.824116       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:30:23.917042       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:30:23.925317       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:30:23.925359       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5884904d4730f90913fe442e22cb4ea363c81c57933ad1d9d3277b2a86688339] <==
	I1019 16:29:45.988661       1 serving.go:386] Generated self-signed cert in-memory
	I1019 16:29:48.013877       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:29:48.013921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:48.031708       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:29:48.034084       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 16:29:48.034186       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 16:29:48.034260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:29:48.035096       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:48.035172       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:48.035587       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:48.035649       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:48.134357       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 16:29:48.135727       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:48.135863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:59.114849       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 16:29:59.114872       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 16:29:59.114891       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 16:29:59.114918       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:59.114936       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:59.114956       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1019 16:29:59.115196       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 16:29:59.115253       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7126285a58c3e9b3745939dc652ca88f66128307d352144a3e23f7ed2758776a] <==
	I1019 16:30:20.102920       1 serving.go:386] Generated self-signed cert in-memory
	W1019 16:30:22.022981       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 16:30:22.023094       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 16:30:22.023136       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 16:30:22.029075       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 16:30:22.111338       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:30:22.115869       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:30:22.118489       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:30:22.122842       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:30:22.124435       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:30:22.122886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:30:22.227515       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 16:40:38 functional-328874 kubelet[3897]: E1019 16:40:38.991729    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:40:47 functional-328874 kubelet[3897]: E1019 16:40:47.992070    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:40:52 functional-328874 kubelet[3897]: E1019 16:40:52.991367    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:40:59 functional-328874 kubelet[3897]: I1019 16:40:59.909056    3897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzn28\" (UniqueName: \"kubernetes.io/projected/dea57983-c92f-4971-90bb-701e41fcbf33-kube-api-access-tzn28\") pod \"busybox-mount\" (UID: \"dea57983-c92f-4971-90bb-701e41fcbf33\") " pod="default/busybox-mount"
	Oct 19 16:40:59 functional-328874 kubelet[3897]: I1019 16:40:59.909190    3897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/dea57983-c92f-4971-90bb-701e41fcbf33-test-volume\") pod \"busybox-mount\" (UID: \"dea57983-c92f-4971-90bb-701e41fcbf33\") " pod="default/busybox-mount"
	Oct 19 16:41:00 functional-328874 kubelet[3897]: W1019 16:41:00.277545    3897 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/crio-225f25534225d456d95c773d8f812424366f77855316f8a8f1c1ceb1da76f1fc WatchSource:0}: Error finding container 225f25534225d456d95c773d8f812424366f77855316f8a8f1c1ceb1da76f1fc: Status 404 returned error can't find the container with id 225f25534225d456d95c773d8f812424366f77855316f8a8f1c1ceb1da76f1fc
	Oct 19 16:41:00 functional-328874 kubelet[3897]: E1019 16:41:00.990762    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:41:03 functional-328874 kubelet[3897]: I1019 16:41:03.938345    3897 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/dea57983-c92f-4971-90bb-701e41fcbf33-test-volume\") pod \"dea57983-c92f-4971-90bb-701e41fcbf33\" (UID: \"dea57983-c92f-4971-90bb-701e41fcbf33\") "
	Oct 19 16:41:03 functional-328874 kubelet[3897]: I1019 16:41:03.938464    3897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea57983-c92f-4971-90bb-701e41fcbf33-test-volume" (OuterVolumeSpecName: "test-volume") pod "dea57983-c92f-4971-90bb-701e41fcbf33" (UID: "dea57983-c92f-4971-90bb-701e41fcbf33"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 19 16:41:03 functional-328874 kubelet[3897]: I1019 16:41:03.938971    3897 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzn28\" (UniqueName: \"kubernetes.io/projected/dea57983-c92f-4971-90bb-701e41fcbf33-kube-api-access-tzn28\") pod \"dea57983-c92f-4971-90bb-701e41fcbf33\" (UID: \"dea57983-c92f-4971-90bb-701e41fcbf33\") "
	Oct 19 16:41:03 functional-328874 kubelet[3897]: I1019 16:41:03.939156    3897 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/dea57983-c92f-4971-90bb-701e41fcbf33-test-volume\") on node \"functional-328874\" DevicePath \"\""
	Oct 19 16:41:03 functional-328874 kubelet[3897]: I1019 16:41:03.943444    3897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea57983-c92f-4971-90bb-701e41fcbf33-kube-api-access-tzn28" (OuterVolumeSpecName: "kube-api-access-tzn28") pod "dea57983-c92f-4971-90bb-701e41fcbf33" (UID: "dea57983-c92f-4971-90bb-701e41fcbf33"). InnerVolumeSpecName "kube-api-access-tzn28". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 19 16:41:04 functional-328874 kubelet[3897]: I1019 16:41:04.040358    3897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tzn28\" (UniqueName: \"kubernetes.io/projected/dea57983-c92f-4971-90bb-701e41fcbf33-kube-api-access-tzn28\") on node \"functional-328874\" DevicePath \"\""
	Oct 19 16:41:04 functional-328874 kubelet[3897]: I1019 16:41:04.796152    3897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225f25534225d456d95c773d8f812424366f77855316f8a8f1c1ceb1da76f1fc"
	Oct 19 16:41:05 functional-328874 kubelet[3897]: E1019 16:41:05.991982    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:41:11 functional-328874 kubelet[3897]: E1019 16:41:11.991556    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:41:13 functional-328874 kubelet[3897]: I1019 16:41:13.112631    3897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/086eeed4-c432-4767-984e-c166609ec767-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fcpdg\" (UID: \"086eeed4-c432-4767-984e-c166609ec767\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fcpdg"
	Oct 19 16:41:13 functional-328874 kubelet[3897]: I1019 16:41:13.112695    3897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/639a761d-4652-47b3-b4b6-29dc2af93d94-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-fc7rz\" (UID: \"639a761d-4652-47b3-b4b6-29dc2af93d94\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fc7rz"
	Oct 19 16:41:13 functional-328874 kubelet[3897]: I1019 16:41:13.112821    3897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwgsb\" (UniqueName: \"kubernetes.io/projected/639a761d-4652-47b3-b4b6-29dc2af93d94-kube-api-access-mwgsb\") pod \"dashboard-metrics-scraper-77bf4d6c4c-fc7rz\" (UID: \"639a761d-4652-47b3-b4b6-29dc2af93d94\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fc7rz"
	Oct 19 16:41:13 functional-328874 kubelet[3897]: I1019 16:41:13.112851    3897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vkff\" (UniqueName: \"kubernetes.io/projected/086eeed4-c432-4767-984e-c166609ec767-kube-api-access-4vkff\") pod \"kubernetes-dashboard-855c9754f9-fcpdg\" (UID: \"086eeed4-c432-4767-984e-c166609ec767\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fcpdg"
	Oct 19 16:41:13 functional-328874 kubelet[3897]: W1019 16:41:13.352518    3897 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/crio-42bdae47508e1edfe3ca4e3adc713f401ef5d21be33ee1a153d3f7c1bca73b1e WatchSource:0}: Error finding container 42bdae47508e1edfe3ca4e3adc713f401ef5d21be33ee1a153d3f7c1bca73b1e: Status 404 returned error can't find the container with id 42bdae47508e1edfe3ca4e3adc713f401ef5d21be33ee1a153d3f7c1bca73b1e
	Oct 19 16:41:13 functional-328874 kubelet[3897]: W1019 16:41:13.435122    3897 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/crio-fff929147e95810a9abf8adbe96680c516439e929d5ed701160e8464cb107a31 WatchSource:0}: Error finding container fff929147e95810a9abf8adbe96680c516439e929d5ed701160e8464cb107a31: Status 404 returned error can't find the container with id fff929147e95810a9abf8adbe96680c516439e929d5ed701160e8464cb107a31
	Oct 19 16:41:19 functional-328874 kubelet[3897]: E1019 16:41:19.991457    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:41:22 functional-328874 kubelet[3897]: E1019 16:41:22.990901    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:41:37 functional-328874 kubelet[3897]: E1019 16:41:37.991895    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	
	
	==> storage-provisioner [05e3c65bac33f3e2ebc1b8f61739cbba1364ac75b1225099662a573a8cb1277d] <==
	
	
	==> storage-provisioner [c17f1c2b86d1b91fd61e5d85131b86025c816acf50368a3e86a34605f593c0f2] <==
	W1019 16:45:49.244928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:51.248045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:51.253029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:53.256828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:53.263484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:55.266090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:55.270655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:57.273511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:57.280577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:59.283673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:45:59.288040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:01.291146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:01.296167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:03.299833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:03.304492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:05.307630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:05.312178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:07.314763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:07.321463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:09.324561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:09.329323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:11.332916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:11.337788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:13.341152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:46:13.345616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
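Note on the repeated storage-provisioner warnings above: "v1 Endpoints is deprecated in v1.33+" is advisory, not a failure; it most likely comes from the provisioner's leader-election lock, which still reads and writes core/v1 Endpoints objects. A minimal check that the discovery.k8s.io/v1 replacement objects exist (only the context name is taken from this run; the namespace is an example):

	# list the EndpointSlice objects that supersede core/v1 Endpoints
	kubectl --context functional-328874 get endpointslices.discovery.k8s.io -n kube-system

The kube-proxy "nodePortAddresses is unset" message above is likewise advisory. A hedged sketch of the remediation the warning itself suggests, assuming the kubeadm-default ConfigMap layout (name kube-proxy, key config.conf):

	# under config.conf, set:   nodePortAddresses: ["primary"]
	kubectl --context functional-328874 -n kube-system edit configmap kube-proxy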
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-328874 -n functional-328874
helpers_test.go:269: (dbg) Run:  kubectl --context functional-328874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-bxlwb hello-node-connect-7d85dfc575-wmhbr dashboard-metrics-scraper-77bf4d6c4c-fc7rz kubernetes-dashboard-855c9754f9-fcpdg
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-328874 describe pod busybox-mount hello-node-75c85bcc94-bxlwb hello-node-connect-7d85dfc575-wmhbr dashboard-metrics-scraper-77bf4d6c4c-fc7rz kubernetes-dashboard-855c9754f9-fcpdg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-328874 describe pod busybox-mount hello-node-75c85bcc94-bxlwb hello-node-connect-7d85dfc575-wmhbr dashboard-metrics-scraper-77bf4d6c4c-fc7rz kubernetes-dashboard-855c9754f9-fcpdg: exit status 1 (121.641439ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-328874/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:40:59 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d6595dd9b5f4e51c968eb2bdc8b7764a07894f88752259b0903ad99b59e6a8e5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 19 Oct 2025 16:41:02 +0000
	      Finished:     Sun, 19 Oct 2025 16:41:02 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tzn28 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tzn28:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-328874
	  Normal  Pulling    5m14s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m12s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.006s (2.006s including waiting). Image size: 3774172 bytes.
	  Normal  Created    5m12s  kubelet            Created container: mount-munger
	  Normal  Started    5m12s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-bxlwb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-328874/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:31:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bzlqn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bzlqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  15m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bxlwb to functional-328874
	  Normal   Pulling    12m (x5 over 15m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     12m (x5 over 15m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    5m3s (x43 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m3s (x43 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-wmhbr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-328874/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:30:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c88n8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c88n8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  15m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wmhbr to functional-328874
	  Normal   Pulling    12m (x5 over 15m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     12m (x5 over 15m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    5m9s (x44 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m9s (x44 over 15m)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-fc7rz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fcpdg" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-328874 describe pod busybox-mount hello-node-75c85bcc94-bxlwb hello-node-connect-7d85dfc575-wmhbr dashboard-metrics-scraper-77bf4d6c4c-fc7rz kubernetes-dashboard-855c9754f9-fcpdg: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.61s)
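The root cause across this test group is the pull error "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list": CRI-O resolves unqualified image names through containers-registries.conf, and in enforcing mode a short name that could map to more than one registry is rejected instead of guessed. Two hedged workaround sketches, not part of this run (the docker.io qualification and the drop-in path are assumptions):

	# 1) fully qualify the image so short-name resolution never triggers
	kubectl --context functional-328874 create deployment hello-node --image=docker.io/kicbase/echo-server:latest

	# 2) or pin an alias on the node, e.g. in /etc/containers/registries.conf.d/50-echo-server.conf:
	#      [aliases]
	#        "kicbase/echo-server" = "docker.io/kicbase/echo-server"

The "Error from server (NotFound)" lines for the two dashboard pods in the stderr above likely mean those pods were deleted (for example by addon teardown) between the non-running-pod listing and the describe call.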

TestFunctional/parallel/ServiceCmdConnect (603.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-328874 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-328874 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-wmhbr" [1c238945-00bb-451e-bda5-6c199ab8393a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-328874 -n functional-328874
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-19 16:40:53.71948193 +0000 UTC m=+1218.068700429
functional_test.go:1645: (dbg) Run:  kubectl --context functional-328874 describe po hello-node-connect-7d85dfc575-wmhbr -n default
functional_test.go:1645: (dbg) kubectl --context functional-328874 describe po hello-node-connect-7d85dfc575-wmhbr -n default:
Name:             hello-node-connect-7d85dfc575-wmhbr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-328874/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:30:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c88n8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-c88n8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wmhbr to functional-328874
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-328874 logs hello-node-connect-7d85dfc575-wmhbr -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-328874 logs hello-node-connect-7d85dfc575-wmhbr -n default: exit status 1 (111.865547ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wmhbr" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-328874 logs hello-node-connect-7d85dfc575-wmhbr -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-328874 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-wmhbr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-328874/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:30:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c88n8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-c88n8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wmhbr to functional-328874
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x22 over 10m)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-328874 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-328874 logs -l app=hello-node-connect: exit status 1 (88.742702ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wmhbr" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-328874 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-328874 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.223.151
IPs:                      10.96.223.151
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32348/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
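The empty Endpoints field above is downstream of the same pull failure rather than a separate fault: the selector app=hello-node-connect matches only a pod that never became Ready, so NodePort 32348 has no backend to route to. A quick confirmation of that chain, using only standard kubectl output options:

	kubectl --context functional-328874 get endpoints hello-node-connect
	kubectl --context functional-328874 get pods -l app=hello-node-connect \
	  -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready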
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-328874
helpers_test.go:243: (dbg) docker inspect functional-328874:

-- stdout --
	[
	    {
	        "Id": "53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d",
	        "Created": "2025-10-19T16:28:13.729951307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19819,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:28:13.791812897Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/hosts",
	        "LogPath": "/var/lib/docker/containers/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d/53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d-json.log",
	        "Name": "/functional-328874",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-328874:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-328874",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53040687d3af4aa4f246cbf70cbdc49472e9cbb415776775c16acf90f26a241d",
	                "LowerDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0a3180ed19a89724c279325b0a2be6c2dbb6f6ebcefce5308282dc22bef2e46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-328874",
	                "Source": "/var/lib/docker/volumes/functional-328874/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-328874",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-328874",
	                "name.minikube.sigs.k8s.io": "functional-328874",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63c2e7e4d28408d781b7724852b5b01955a1749d2feb6c063a7d2b19f26b2331",
	            "SandboxKey": "/var/run/docker/netns/63c2e7e4d284",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-328874": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:4a:0e:80:5a:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "600ec8e9409c4b1d1b152089c6647bbccf98cddbf5d30c37188777771b635dc6",
	                    "EndpointID": "783860c0a458e4f043008b0606242fa9fe42a3513ad595527519d5052efb0a65",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-328874",
	                        "53040687d3af"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
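One detail worth reading out of the inspect dump above: HostConfig.PortBindings is declared with empty HostPort values (Docker assigns ephemeral host ports at container start), and the resolved mappings land under NetworkSettings.Ports, so the API server port 8441/tcp is reachable at 127.0.0.1:32781. The mapping can be queried with the same Go template the minikube logs below use for 22/tcp, shown here for 8441/tcp as a sketch:

	docker container inspect functional-328874 \
	  -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'

which should print 32781 for this run.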
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-328874 -n functional-328874
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 logs -n 25: (1.513531836s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-328874 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ kubectl │ functional-328874 kubectl -- --context functional-328874 get pods                                                         │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ start   │ -p functional-328874 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:30 UTC │
	│ service │ invalid-svc -p functional-328874                                                                                          │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │                     │
	│ cp      │ functional-328874 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ config  │ functional-328874 config unset cpus                                                                                       │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ config  │ functional-328874 config get cpus                                                                                         │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │                     │
	│ config  │ functional-328874 config set cpus 2                                                                                       │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ config  │ functional-328874 config get cpus                                                                                         │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ config  │ functional-328874 config unset cpus                                                                                       │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ ssh     │ functional-328874 ssh -n functional-328874 sudo cat /home/docker/cp-test.txt                                              │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ config  │ functional-328874 config get cpus                                                                                         │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │                     │
	│ ssh     │ functional-328874 ssh echo hello                                                                                          │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ cp      │ functional-328874 cp functional-328874:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd376228002/001/cp-test.txt │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ ssh     │ functional-328874 ssh cat /etc/hostname                                                                                   │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ ssh     │ functional-328874 ssh -n functional-328874 sudo cat /home/docker/cp-test.txt                                              │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ tunnel  │ functional-328874 tunnel --alsologtostderr                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │                     │
	│ tunnel  │ functional-328874 tunnel --alsologtostderr                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │                     │
	│ cp      │ functional-328874 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ tunnel  │ functional-328874 tunnel --alsologtostderr                                                                                │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │                     │
	│ ssh     │ functional-328874 ssh -n functional-328874 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ addons  │ functional-328874 addons list                                                                                             │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	│ addons  │ functional-328874 addons list -o json                                                                                     │ functional-328874 │ jenkins │ v1.37.0 │ 19 Oct 25 16:30 UTC │ 19 Oct 25 16:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:29:57
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:29:57.714940   24026 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:57.715101   24026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:57.715106   24026 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:57.715110   24026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:57.715370   24026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:29:57.715724   24026 out.go:368] Setting JSON to false
	I1019 16:29:57.716543   24026 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":746,"bootTime":1760890652,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:29:57.716599   24026 start.go:143] virtualization:  
	I1019 16:29:57.720150   24026 out.go:179] * [functional-328874] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 16:29:57.724068   24026 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:57.724079   24026 notify.go:221] Checking for updates...
	I1019 16:29:57.728076   24026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:57.730989   24026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:29:57.733803   24026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:29:57.736685   24026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 16:29:57.739627   24026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:57.743097   24026 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:57.743211   24026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:57.779820   24026 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:29:57.779920   24026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:57.852022   24026 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-19 16:29:57.84334569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:29:57.852109   24026 docker.go:319] overlay module found
	I1019 16:29:57.855306   24026 out.go:179] * Using the docker driver based on existing profile
	I1019 16:29:57.858227   24026 start.go:309] selected driver: docker
	I1019 16:29:57.858237   24026 start.go:930] validating driver "docker" against &{Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:57.858340   24026 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:57.858445   24026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:57.911900   24026 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-19 16:29:57.90335795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:29:57.912302   24026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:29:57.912326   24026 cni.go:84] Creating CNI manager for ""
	I1019 16:29:57.912379   24026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:29:57.912425   24026 start.go:353] cluster config:
	{Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:57.915585   24026 out.go:179] * Starting "functional-328874" primary control-plane node in "functional-328874" cluster
	I1019 16:29:57.918465   24026 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:29:57.921350   24026 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:29:57.924154   24026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:29:57.924206   24026 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 16:29:57.924213   24026 cache.go:59] Caching tarball of preloaded images
	I1019 16:29:57.924309   24026 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 16:29:57.924318   24026 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 16:29:57.924424   24026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/config.json ...
	I1019 16:29:57.924626   24026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:29:57.944301   24026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 16:29:57.944313   24026 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 16:29:57.944324   24026 cache.go:233] Successfully downloaded all kic artifacts
	I1019 16:29:57.944345   24026 start.go:360] acquireMachinesLock for functional-328874: {Name:mk7f6e00d18e89f418c1c607bdfbddc915c552cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:29:57.944394   24026 start.go:364] duration metric: took 34.65µs to acquireMachinesLock for "functional-328874"
	I1019 16:29:57.944412   24026 start.go:96] Skipping create...Using existing machine configuration
	I1019 16:29:57.944416   24026 fix.go:54] fixHost starting: 
	I1019 16:29:57.944668   24026 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
	I1019 16:29:57.961405   24026 fix.go:112] recreateIfNeeded on functional-328874: state=Running err=<nil>
	W1019 16:29:57.961424   24026 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 16:29:57.964681   24026 out.go:252] * Updating the running docker "functional-328874" container ...
	I1019 16:29:57.964709   24026 machine.go:94] provisionDockerMachine start ...
	I1019 16:29:57.964833   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:29:57.982269   24026 main.go:143] libmachine: Using SSH client type: native
	I1019 16:29:57.982595   24026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1019 16:29:57.982601   24026 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 16:29:58.134150   24026 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-328874
	
	I1019 16:29:58.134172   24026 ubuntu.go:182] provisioning hostname "functional-328874"
	I1019 16:29:58.134241   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:29:58.152961   24026 main.go:143] libmachine: Using SSH client type: native
	I1019 16:29:58.153254   24026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1019 16:29:58.153264   24026 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-328874 && echo "functional-328874" | sudo tee /etc/hostname
	I1019 16:29:58.307241   24026 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-328874
	
	I1019 16:29:58.307319   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:29:58.324514   24026 main.go:143] libmachine: Using SSH client type: native
	I1019 16:29:58.324810   24026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1019 16:29:58.324824   24026 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-328874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-328874/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-328874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 16:29:58.474782   24026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 16:29:58.474799   24026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 16:29:58.474818   24026 ubuntu.go:190] setting up certificates
	I1019 16:29:58.474826   24026 provision.go:84] configureAuth start
	I1019 16:29:58.474895   24026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-328874
	I1019 16:29:58.493668   24026 provision.go:143] copyHostCerts
	I1019 16:29:58.493735   24026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 16:29:58.493742   24026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 16:29:58.493814   24026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 16:29:58.493915   24026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 16:29:58.493919   24026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 16:29:58.493943   24026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 16:29:58.493998   24026 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 16:29:58.494001   24026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 16:29:58.494022   24026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 16:29:58.494071   24026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.functional-328874 san=[127.0.0.1 192.168.49.2 functional-328874 localhost minikube]
	I1019 16:29:58.750690   24026 provision.go:177] copyRemoteCerts
	I1019 16:29:58.750740   24026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 16:29:58.750777   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:29:58.767506   24026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:29:58.874375   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 16:29:58.892516   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 16:29:58.909654   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 16:29:58.927276   24026 provision.go:87] duration metric: took 452.427671ms to configureAuth
	I1019 16:29:58.927293   24026 ubuntu.go:206] setting minikube options for container-runtime
	I1019 16:29:58.927498   24026 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:29:58.927605   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:29:58.944450   24026 main.go:143] libmachine: Using SSH client type: native
	I1019 16:29:58.944741   24026 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1019 16:29:58.944756   24026 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 16:30:04.314359   24026 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 16:30:04.314372   24026 machine.go:97] duration metric: took 6.349655942s to provisionDockerMachine
	I1019 16:30:04.314382   24026 start.go:293] postStartSetup for "functional-328874" (driver="docker")
	I1019 16:30:04.314410   24026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 16:30:04.314466   24026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 16:30:04.314502   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:30:04.332397   24026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:30:04.434598   24026 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 16:30:04.437960   24026 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 16:30:04.437980   24026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 16:30:04.437989   24026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 16:30:04.438063   24026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 16:30:04.438156   24026 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 16:30:04.438231   24026 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/test/nested/copy/4111/hosts -> hosts in /etc/test/nested/copy/4111
	I1019 16:30:04.438274   24026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4111
	I1019 16:30:04.446022   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 16:30:04.465259   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/test/nested/copy/4111/hosts --> /etc/test/nested/copy/4111/hosts (40 bytes)
	I1019 16:30:04.482555   24026 start.go:296] duration metric: took 168.14427ms for postStartSetup
	I1019 16:30:04.482638   24026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:30:04.482728   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:30:04.499377   24026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:30:04.599497   24026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 16:30:04.604188   24026 fix.go:56] duration metric: took 6.659764013s for fixHost
	I1019 16:30:04.604203   24026 start.go:83] releasing machines lock for "functional-328874", held for 6.659802454s
	I1019 16:30:04.604268   24026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-328874
	I1019 16:30:04.621047   24026 ssh_runner.go:195] Run: cat /version.json
	I1019 16:30:04.621087   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:30:04.621345   24026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 16:30:04.621397   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:30:04.640416   24026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:30:04.643185   24026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:30:04.832451   24026 ssh_runner.go:195] Run: systemctl --version
	I1019 16:30:04.838837   24026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 16:30:04.876497   24026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 16:30:04.880902   24026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 16:30:04.880968   24026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 16:30:04.888849   24026 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 16:30:04.888863   24026 start.go:496] detecting cgroup driver to use...
	I1019 16:30:04.888892   24026 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 16:30:04.888989   24026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 16:30:04.904739   24026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 16:30:04.917906   24026 docker.go:218] disabling cri-docker service (if available) ...
	I1019 16:30:04.917960   24026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 16:30:04.933364   24026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 16:30:04.946815   24026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 16:30:05.095637   24026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 16:30:05.229440   24026 docker.go:234] disabling docker service ...
	I1019 16:30:05.229498   24026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 16:30:05.244795   24026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 16:30:05.259129   24026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 16:30:05.388359   24026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 16:30:05.525432   24026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 16:30:05.538600   24026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 16:30:05.553015   24026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 16:30:05.553085   24026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:30:05.562828   24026 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 16:30:05.562898   24026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:30:05.572870   24026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:30:05.582827   24026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:30:05.592796   24026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 16:30:05.601877   24026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:30:05.611387   24026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:30:05.619941   24026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:30:05.628712   24026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 16:30:05.636572   24026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 16:30:05.644270   24026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:30:05.771345   24026 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 16:30:13.592306   24026 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.820933337s)
	I1019 16:30:13.592325   24026 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 16:30:13.592382   24026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 16:30:13.596406   24026 start.go:564] Will wait 60s for crictl version
	I1019 16:30:13.596461   24026 ssh_runner.go:195] Run: which crictl
	I1019 16:30:13.600284   24026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 16:30:13.629988   24026 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 16:30:13.630071   24026 ssh_runner.go:195] Run: crio --version
	I1019 16:30:13.660562   24026 ssh_runner.go:195] Run: crio --version
	I1019 16:30:13.696557   24026 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 16:30:13.699665   24026 cli_runner.go:164] Run: docker network inspect functional-328874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:30:13.715876   24026 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 16:30:13.723331   24026 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1019 16:30:13.726211   24026 kubeadm.go:884] updating cluster {Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 16:30:13.726336   24026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:30:13.726410   24026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:30:13.769712   24026 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:30:13.769723   24026 crio.go:433] Images already preloaded, skipping extraction
	I1019 16:30:13.769780   24026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:30:13.795901   24026 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:30:13.795914   24026 cache_images.go:86] Images are preloaded, skipping loading
	I1019 16:30:13.795920   24026 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1019 16:30:13.796019   24026 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-328874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 16:30:13.796099   24026 ssh_runner.go:195] Run: crio config
	I1019 16:30:13.856195   24026 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1019 16:30:13.856214   24026 cni.go:84] Creating CNI manager for ""
	I1019 16:30:13.856223   24026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:30:13.856236   24026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 16:30:13.856263   24026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-328874 NodeName:functional-328874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 16:30:13.856380   24026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-328874"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
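The generated /var/tmp/minikube/kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, assuming gopkg.in/yaml.v3, that walks such a stream and prints each document's kind:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3's Decoder reads one document per Decode call and returns
	// io.EOF when the multi-document stream is exhausted.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}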
	I1019 16:30:13.856445   24026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 16:30:13.864251   24026 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 16:30:13.864320   24026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 16:30:13.872001   24026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 16:30:13.885320   24026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 16:30:13.898635   24026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1019 16:30:13.911871   24026 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 16:30:13.915800   24026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:30:14.051640   24026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:30:14.066029   24026 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874 for IP: 192.168.49.2
	I1019 16:30:14.066040   24026 certs.go:195] generating shared ca certs ...
	I1019 16:30:14.066055   24026 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:30:14.066193   24026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 16:30:14.066228   24026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 16:30:14.066233   24026 certs.go:257] generating profile certs ...
	I1019 16:30:14.066319   24026 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.key
	I1019 16:30:14.066364   24026 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/apiserver.key.9801e0e0
	I1019 16:30:14.066400   24026 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/proxy-client.key
	I1019 16:30:14.066512   24026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 16:30:14.066569   24026 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 16:30:14.066576   24026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 16:30:14.066608   24026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 16:30:14.066630   24026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 16:30:14.066649   24026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 16:30:14.066688   24026 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 16:30:14.067346   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 16:30:14.087479   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 16:30:14.105560   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 16:30:14.123580   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 16:30:14.141390   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 16:30:14.159139   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 16:30:14.176849   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 16:30:14.194085   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1019 16:30:14.211963   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 16:30:14.229627   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 16:30:14.248796   24026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 16:30:14.266663   24026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 16:30:14.279568   24026 ssh_runner.go:195] Run: openssl version
	I1019 16:30:14.285789   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 16:30:14.294443   24026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 16:30:14.298154   24026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 16:30:14.298210   24026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 16:30:14.338913   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 16:30:14.346939   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 16:30:14.354982   24026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:30:14.358432   24026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:30:14.358485   24026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:30:14.399453   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 16:30:14.407302   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 16:30:14.415250   24026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 16:30:14.418987   24026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 16:30:14.419042   24026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 16:30:14.459913   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
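Each CA bundle copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked as /etc/ssl/certs/<hash>.0, the filename OpenSSL-style trust stores use for CA lookup. A sketch of that step, shelling out so the hash matches exactly what openssl computes (linkCA is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pem string) error {
	// Same invocation as the log: print the subject hash of the cert.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}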
	I1019 16:30:14.468661   24026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 16:30:14.472643   24026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 16:30:14.513766   24026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 16:30:14.555178   24026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 16:30:14.596920   24026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 16:30:14.639937   24026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 16:30:14.681709   24026 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
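The six openssl x509 -checkend 86400 runs above ask whether each control-plane certificate is still valid 24 hours from now. The same check in native Go with crypto/x509 (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}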
	I1019 16:30:14.722447   24026 kubeadm.go:401] StartCluster: {Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:30:14.722523   24026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:30:14.722609   24026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:30:14.752066   24026 cri.go:89] found id: "1ea61893b4a7a6c9c6cb14315b9c3c4ef56bd18e14a9d5a609bc309ab6466cd6"
	I1019 16:30:14.752077   24026 cri.go:89] found id: "36df817c549613852673370c0f49938fbc207fb47d4cd263679facfc499b2a41"
	I1019 16:30:14.752080   24026 cri.go:89] found id: "d6dd353e9326fab74dfd667e341f6f1a5a012c6be5d14e6a42b8c35c9343df48"
	I1019 16:30:14.752083   24026 cri.go:89] found id: "54f8743c36bea2aa4415c7ed67c42430c15c73d2a01401b61764be1ecd33ed53"
	I1019 16:30:14.752086   24026 cri.go:89] found id: "5884904d4730f90913fe442e22cb4ea363c81c57933ad1d9d3277b2a86688339"
	I1019 16:30:14.752089   24026 cri.go:89] found id: "ecffc18cc92ece5f5d22b35bddc4303f0531a00e27fe53c004b077ecc57e7701"
	I1019 16:30:14.752091   24026 cri.go:89] found id: "95eac0237ac8fdaa63e8169bd230e36ccf7838b8e7c4d5f37566685e75a7a12b"
	I1019 16:30:14.752093   24026 cri.go:89] found id: "7a11f6e21bfb845abafa9df5da09529c367cb557ccb147697a4b34da4543a390"
	I1019 16:30:14.752105   24026 cri.go:89] found id: "a41d61747d7d8d2072e48a143c17728fecc131dad0cd0a726e479f092e26fb97"
	I1019 16:30:14.752111   24026 cri.go:89] found id: "3abe305eb2153bd60f79ff2fff8163e5760885025964eedefc8bdeeb9992fe3f"
	I1019 16:30:14.752113   24026 cri.go:89] found id: "95e4983f558d5a89eb1018cb8ceed8fc1f4889ee043337dfb01459a478d9f6d5"
	I1019 16:30:14.752117   24026 cri.go:89] found id: "4b658c5b60a44a5903ff58e359ed41910bde0f945efb7507de264e69d084e850"
	I1019 16:30:14.752119   24026 cri.go:89] found id: "fa2ff165374037767539dc4d0be85f3d671b9e433993403f72e5944a7d913a40"
	I1019 16:30:14.752120   24026 cri.go:89] found id: "b97485b8d51b79ccef5e5a39743af9699756bb113ac9b24e87cb267762ea001d"
	I1019 16:30:14.752123   24026 cri.go:89] found id: "aea89a9640a233ee09649d8c037287cb3f55736285b251abc432f08314a5dc2c"
	I1019 16:30:14.752127   24026 cri.go:89] found id: ""
	I1019 16:30:14.752175   24026 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 16:30:14.762978   24026 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:30:14Z" level=error msg="open /run/runc: no such file or directory"
	I1019 16:30:14.763045   24026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 16:30:14.770684   24026 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 16:30:14.770693   24026 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 16:30:14.770752   24026 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 16:30:14.777736   24026 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:30:14.778293   24026 kubeconfig.go:125] found "functional-328874" server: "https://192.168.49.2:8441"
	I1019 16:30:14.779668   24026 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 16:30:14.787168   24026 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-19 16:28:25.091221789 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-19 16:30:13.903743575 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
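Drift detection shells out to diff -u to produce the readable report above; the decision itself only needs to know whether the rendered kubeadm.yaml.new differs from the file on disk. A byte-comparison sketch of that decision:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// drifted reports whether the freshly rendered config differs from the one
// currently on the node; both files are generated deterministically, so a
// byte comparison is sufficient for the yes/no answer.
func drifted(current, next string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err // a missing current config counts as drift
	}
	b, err := os.ReadFile(next)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	d, err := drifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("drift detected:", d)
}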
	I1019 16:30:14.787178   24026 kubeadm.go:1161] stopping kube-system containers ...
	I1019 16:30:14.787198   24026 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1019 16:30:14.787251   24026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:30:14.816490   24026 cri.go:89] found id: "1ea61893b4a7a6c9c6cb14315b9c3c4ef56bd18e14a9d5a609bc309ab6466cd6"
	I1019 16:30:14.816501   24026 cri.go:89] found id: "36df817c549613852673370c0f49938fbc207fb47d4cd263679facfc499b2a41"
	I1019 16:30:14.816504   24026 cri.go:89] found id: "d6dd353e9326fab74dfd667e341f6f1a5a012c6be5d14e6a42b8c35c9343df48"
	I1019 16:30:14.816507   24026 cri.go:89] found id: "54f8743c36bea2aa4415c7ed67c42430c15c73d2a01401b61764be1ecd33ed53"
	I1019 16:30:14.816509   24026 cri.go:89] found id: "5884904d4730f90913fe442e22cb4ea363c81c57933ad1d9d3277b2a86688339"
	I1019 16:30:14.816512   24026 cri.go:89] found id: "ecffc18cc92ece5f5d22b35bddc4303f0531a00e27fe53c004b077ecc57e7701"
	I1019 16:30:14.816514   24026 cri.go:89] found id: "95eac0237ac8fdaa63e8169bd230e36ccf7838b8e7c4d5f37566685e75a7a12b"
	I1019 16:30:14.816516   24026 cri.go:89] found id: "7a11f6e21bfb845abafa9df5da09529c367cb557ccb147697a4b34da4543a390"
	I1019 16:30:14.816518   24026 cri.go:89] found id: "a41d61747d7d8d2072e48a143c17728fecc131dad0cd0a726e479f092e26fb97"
	I1019 16:30:14.816523   24026 cri.go:89] found id: "3abe305eb2153bd60f79ff2fff8163e5760885025964eedefc8bdeeb9992fe3f"
	I1019 16:30:14.816525   24026 cri.go:89] found id: "95e4983f558d5a89eb1018cb8ceed8fc1f4889ee043337dfb01459a478d9f6d5"
	I1019 16:30:14.816527   24026 cri.go:89] found id: "4b658c5b60a44a5903ff58e359ed41910bde0f945efb7507de264e69d084e850"
	I1019 16:30:14.816539   24026 cri.go:89] found id: "fa2ff165374037767539dc4d0be85f3d671b9e433993403f72e5944a7d913a40"
	I1019 16:30:14.816541   24026 cri.go:89] found id: "b97485b8d51b79ccef5e5a39743af9699756bb113ac9b24e87cb267762ea001d"
	I1019 16:30:14.816543   24026 cri.go:89] found id: "aea89a9640a233ee09649d8c037287cb3f55736285b251abc432f08314a5dc2c"
	I1019 16:30:14.816553   24026 cri.go:89] found id: ""
	I1019 16:30:14.816557   24026 cri.go:252] Stopping containers: [1ea61893b4a7a6c9c6cb14315b9c3c4ef56bd18e14a9d5a609bc309ab6466cd6 36df817c549613852673370c0f49938fbc207fb47d4cd263679facfc499b2a41 d6dd353e9326fab74dfd667e341f6f1a5a012c6be5d14e6a42b8c35c9343df48 54f8743c36bea2aa4415c7ed67c42430c15c73d2a01401b61764be1ecd33ed53 5884904d4730f90913fe442e22cb4ea363c81c57933ad1d9d3277b2a86688339 ecffc18cc92ece5f5d22b35bddc4303f0531a00e27fe53c004b077ecc57e7701 95eac0237ac8fdaa63e8169bd230e36ccf7838b8e7c4d5f37566685e75a7a12b 7a11f6e21bfb845abafa9df5da09529c367cb557ccb147697a4b34da4543a390 a41d61747d7d8d2072e48a143c17728fecc131dad0cd0a726e479f092e26fb97 3abe305eb2153bd60f79ff2fff8163e5760885025964eedefc8bdeeb9992fe3f 95e4983f558d5a89eb1018cb8ceed8fc1f4889ee043337dfb01459a478d9f6d5 4b658c5b60a44a5903ff58e359ed41910bde0f945efb7507de264e69d084e850 fa2ff165374037767539dc4d0be85f3d671b9e433993403f72e5944a7d913a40 b97485b8d51b79ccef5e5a39743af9699756bb113ac9b24e87cb267762ea001d aea89a9640a233ee09649d8c037287cb3f55736285b251abc432f08314a5dc2c]
	I1019 16:30:14.816610   24026 ssh_runner.go:195] Run: which crictl
	I1019 16:30:14.820246   24026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 1ea61893b4a7a6c9c6cb14315b9c3c4ef56bd18e14a9d5a609bc309ab6466cd6 36df817c549613852673370c0f49938fbc207fb47d4cd263679facfc499b2a41 d6dd353e9326fab74dfd667e341f6f1a5a012c6be5d14e6a42b8c35c9343df48 54f8743c36bea2aa4415c7ed67c42430c15c73d2a01401b61764be1ecd33ed53 5884904d4730f90913fe442e22cb4ea363c81c57933ad1d9d3277b2a86688339 ecffc18cc92ece5f5d22b35bddc4303f0531a00e27fe53c004b077ecc57e7701 95eac0237ac8fdaa63e8169bd230e36ccf7838b8e7c4d5f37566685e75a7a12b 7a11f6e21bfb845abafa9df5da09529c367cb557ccb147697a4b34da4543a390 a41d61747d7d8d2072e48a143c17728fecc131dad0cd0a726e479f092e26fb97 3abe305eb2153bd60f79ff2fff8163e5760885025964eedefc8bdeeb9992fe3f 95e4983f558d5a89eb1018cb8ceed8fc1f4889ee043337dfb01459a478d9f6d5 4b658c5b60a44a5903ff58e359ed41910bde0f945efb7507de264e69d084e850 fa2ff165374037767539dc4d0be85f3d671b9e433993403f72e5944a7d913a40 b97485b8d51b79ccef5e5a39743af9699756bb113ac9b24e87cb267762ea001d aea89a9640a233ee09649d8c037287cb3f55736285b251abc432f08314a5dc2c
	I1019 16:30:14.941073   24026 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1019 16:30:15.069666   24026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 16:30:15.078518   24026 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Oct 19 16:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 19 16:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 19 16:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 19 16:28 /etc/kubernetes/scheduler.conf
	
	I1019 16:30:15.078612   24026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1019 16:30:15.087061   24026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1019 16:30:15.094971   24026 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:30:15.095028   24026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 16:30:15.103223   24026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1019 16:30:15.111585   24026 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:30:15.111657   24026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 16:30:15.119725   24026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1019 16:30:15.127936   24026 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:30:15.127996   24026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
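Each kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8441; the grep/rm pairs above delete kubelet.conf, controller-manager.conf and scheduler.conf because they lack that endpoint, so kubeadm can regenerate them. A sketch of the check-then-prune logic:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStale removes a kubeconfig that does not reference the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log.
func pruneStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // already points at the right endpoint
	}
	fmt.Printf("%s lacks %s; removing\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStale(f, "https://control-plane.minikube.internal:8441"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}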
	I1019 16:30:15.135749   24026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 16:30:15.143933   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 16:30:15.192525   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 16:30:17.624865   24026 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.4323156s)
	I1019 16:30:17.624923   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1019 16:30:17.838962   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 16:30:17.909964   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
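Rather than a full kubeadm init, the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directories. A sketch of the same sequence via os/exec, with the binary path and phase order taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}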
	I1019 16:30:17.987748   24026 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:30:17.987826   24026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:30:18.487976   24026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:30:18.987966   24026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:30:19.003235   24026 api_server.go:72] duration metric: took 1.015492448s to wait for apiserver process to appear ...
	I1019 16:30:19.003251   24026 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:30:19.003271   24026 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 16:30:22.016838   24026 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 16:30:22.016855   24026 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 16:30:22.016868   24026 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 16:30:22.097812   24026 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 16:30:22.097831   24026 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 16:30:22.503335   24026 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 16:30:22.511655   24026 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:30:22.511672   24026 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:30:23.003952   24026 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 16:30:23.012924   24026 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:30:23.012943   24026 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:30:23.503401   24026 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 16:30:23.512899   24026 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1019 16:30:23.527721   24026 api_server.go:141] control plane version: v1.34.1
	I1019 16:30:23.527740   24026 api_server.go:131] duration metric: took 4.52448391s to wait for apiserver health ...
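The healthz wait is an unauthenticated HTTPS probe, which is why the first responses are 403 for system:anonymous and then 500 while the rbac/bootstrap-roles post-start hook finishes, before settling at 200 "ok". A polling sketch; InsecureSkipVerify is an assumption for brevity, a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// healthz returns the literal body "ok" once all checks pass.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}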
	I1019 16:30:23.527749   24026 cni.go:84] Creating CNI manager for ""
	I1019 16:30:23.527755   24026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:30:23.531205   24026 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 16:30:23.534222   24026 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 16:30:23.538421   24026 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 16:30:23.538431   24026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 16:30:23.553025   24026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 16:30:24.041382   24026 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 16:30:24.046750   24026 system_pods.go:59] 8 kube-system pods found
	I1019 16:30:24.046772   24026 system_pods.go:61] "coredns-66bc5c9577-hxbk8" [df312951-680f-4c66-a346-82716a0ba341] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:30:24.046783   24026 system_pods.go:61] "etcd-functional-328874" [02943ffb-99e2-4014-9c5c-95dd962ff830] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 16:30:24.046789   24026 system_pods.go:61] "kindnet-rnknf" [4effd3b2-7abe-4e81-936a-d70056819f13] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 16:30:24.046793   24026 system_pods.go:61] "kube-apiserver-functional-328874" [5ec280ff-1b3f-49db-975e-1925bded7f1c] Pending
	I1019 16:30:24.046801   24026 system_pods.go:61] "kube-controller-manager-functional-328874" [554388f8-39d7-4372-834f-aac24ea4b8e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 16:30:24.046806   24026 system_pods.go:61] "kube-proxy-7lgrr" [6b748d94-9d4a-4034-bcc0-cabfc1bbd9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 16:30:24.046812   24026 system_pods.go:61] "kube-scheduler-functional-328874" [d8d3c411-035c-4c47-81e7-aac552c86a5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 16:30:24.046817   24026 system_pods.go:61] "storage-provisioner" [50fd0888-9ce0-4aed-a3d8-3d09f3e58f1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:30:24.046821   24026 system_pods.go:74] duration metric: took 5.428673ms to wait for pod list to return data ...
	I1019 16:30:24.046827   24026 node_conditions.go:102] verifying NodePressure condition ...
	I1019 16:30:24.050127   24026 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 16:30:24.050146   24026 node_conditions.go:123] node cpu capacity is 2
	I1019 16:30:24.050157   24026 node_conditions.go:105] duration metric: took 3.326708ms to run NodePressure ...
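Pod state and node capacity are read back through the Kubernetes API. A minimal client-go sketch, assuming a kubeconfig at the default location, that lists the kube-system pods enumerated in the log above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and build a clientset from it.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}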
	I1019 16:30:24.050220   24026 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 16:30:24.370987   24026 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1019 16:30:24.375781   24026 kubeadm.go:744] kubelet initialised
	I1019 16:30:24.375792   24026 kubeadm.go:745] duration metric: took 4.791386ms waiting for restarted kubelet to initialise ...
	I1019 16:30:24.375805   24026 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 16:30:24.387027   24026 ops.go:34] apiserver oom_adj: -16
	I1019 16:30:24.387040   24026 kubeadm.go:602] duration metric: took 9.616341595s to restartPrimaryControlPlane
	I1019 16:30:24.387047   24026 kubeadm.go:403] duration metric: took 9.664609812s to StartCluster
	I1019 16:30:24.387062   24026 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:30:24.387126   24026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:30:24.387741   24026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:30:24.387958   24026 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:30:24.388235   24026 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:30:24.388273   24026 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 16:30:24.388395   24026 addons.go:70] Setting storage-provisioner=true in profile "functional-328874"
	I1019 16:30:24.388408   24026 addons.go:239] Setting addon storage-provisioner=true in "functional-328874"
	I1019 16:30:24.388409   24026 addons.go:70] Setting default-storageclass=true in profile "functional-328874"
	W1019 16:30:24.388413   24026 addons.go:248] addon storage-provisioner should already be in state true
	I1019 16:30:24.388421   24026 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-328874"
	I1019 16:30:24.388433   24026 host.go:66] Checking if "functional-328874" exists ...
	I1019 16:30:24.388743   24026 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
	I1019 16:30:24.388882   24026 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
	I1019 16:30:24.394406   24026 out.go:179] * Verifying Kubernetes components...
	I1019 16:30:24.401459   24026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:30:24.419560   24026 addons.go:239] Setting addon default-storageclass=true in "functional-328874"
	W1019 16:30:24.419571   24026 addons.go:248] addon default-storageclass should already be in state true
	I1019 16:30:24.419593   24026 host.go:66] Checking if "functional-328874" exists ...
	I1019 16:30:24.421043   24026 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
	I1019 16:30:24.441385   24026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:30:24.444428   24026 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:30:24.444439   24026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 16:30:24.444506   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:30:24.451973   24026 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 16:30:24.451985   24026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 16:30:24.452046   24026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:30:24.472783   24026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:30:24.486457   24026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:30:24.604642   24026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:30:24.638320   24026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:30:24.659298   24026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 16:30:25.399879   24026 node_ready.go:35] waiting up to 6m0s for node "functional-328874" to be "Ready" ...
	I1019 16:30:25.402866   24026 node_ready.go:49] node "functional-328874" is "Ready"
	I1019 16:30:25.402880   24026 node_ready.go:38] duration metric: took 2.970948ms for node "functional-328874" to be "Ready" ...
	I1019 16:30:25.402892   24026 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:30:25.402951   24026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:30:25.411024   24026 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 16:30:25.413905   24026 addons.go:515] duration metric: took 1.025614474s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 16:30:25.417402   24026 api_server.go:72] duration metric: took 1.029417328s to wait for apiserver process to appear ...
	I1019 16:30:25.417415   24026 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:30:25.417433   24026 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 16:30:25.426718   24026 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1019 16:30:25.427835   24026 api_server.go:141] control plane version: v1.34.1
	I1019 16:30:25.427848   24026 api_server.go:131] duration metric: took 10.428003ms to wait for apiserver health ...
	I1019 16:30:25.427856   24026 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 16:30:25.431011   24026 system_pods.go:59] 8 kube-system pods found
	I1019 16:30:25.431026   24026 system_pods.go:61] "coredns-66bc5c9577-hxbk8" [df312951-680f-4c66-a346-82716a0ba341] Running
	I1019 16:30:25.431035   24026 system_pods.go:61] "etcd-functional-328874" [02943ffb-99e2-4014-9c5c-95dd962ff830] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 16:30:25.431039   24026 system_pods.go:61] "kindnet-rnknf" [4effd3b2-7abe-4e81-936a-d70056819f13] Running
	I1019 16:30:25.431046   24026 system_pods.go:61] "kube-apiserver-functional-328874" [5ec280ff-1b3f-49db-975e-1925bded7f1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 16:30:25.431052   24026 system_pods.go:61] "kube-controller-manager-functional-328874" [554388f8-39d7-4372-834f-aac24ea4b8e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 16:30:25.431055   24026 system_pods.go:61] "kube-proxy-7lgrr" [6b748d94-9d4a-4034-bcc0-cabfc1bbd9b0] Running
	I1019 16:30:25.431061   24026 system_pods.go:61] "kube-scheduler-functional-328874" [d8d3c411-035c-4c47-81e7-aac552c86a5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 16:30:25.431065   24026 system_pods.go:61] "storage-provisioner" [50fd0888-9ce0-4aed-a3d8-3d09f3e58f1f] Running
	I1019 16:30:25.431071   24026 system_pods.go:74] duration metric: took 3.209218ms to wait for pod list to return data ...
	I1019 16:30:25.431077   24026 default_sa.go:34] waiting for default service account to be created ...
	I1019 16:30:25.433396   24026 default_sa.go:45] found service account: "default"
	I1019 16:30:25.433410   24026 default_sa.go:55] duration metric: took 2.327746ms for default service account to be created ...
	I1019 16:30:25.433418   24026 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 16:30:25.436600   24026 system_pods.go:86] 8 kube-system pods found
	I1019 16:30:25.436615   24026 system_pods.go:89] "coredns-66bc5c9577-hxbk8" [df312951-680f-4c66-a346-82716a0ba341] Running
	I1019 16:30:25.436624   24026 system_pods.go:89] "etcd-functional-328874" [02943ffb-99e2-4014-9c5c-95dd962ff830] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 16:30:25.436629   24026 system_pods.go:89] "kindnet-rnknf" [4effd3b2-7abe-4e81-936a-d70056819f13] Running
	I1019 16:30:25.436636   24026 system_pods.go:89] "kube-apiserver-functional-328874" [5ec280ff-1b3f-49db-975e-1925bded7f1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 16:30:25.436641   24026 system_pods.go:89] "kube-controller-manager-functional-328874" [554388f8-39d7-4372-834f-aac24ea4b8e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 16:30:25.436645   24026 system_pods.go:89] "kube-proxy-7lgrr" [6b748d94-9d4a-4034-bcc0-cabfc1bbd9b0] Running
	I1019 16:30:25.436650   24026 system_pods.go:89] "kube-scheduler-functional-328874" [d8d3c411-035c-4c47-81e7-aac552c86a5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 16:30:25.436653   24026 system_pods.go:89] "storage-provisioner" [50fd0888-9ce0-4aed-a3d8-3d09f3e58f1f] Running
	I1019 16:30:25.436660   24026 system_pods.go:126] duration metric: took 3.235819ms to wait for k8s-apps to be running ...
	I1019 16:30:25.436667   24026 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 16:30:25.436722   24026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:30:25.450689   24026 system_svc.go:56] duration metric: took 14.01241ms WaitForService to wait for kubelet
	I1019 16:30:25.450707   24026 kubeadm.go:587] duration metric: took 1.062728355s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:30:25.450737   24026 node_conditions.go:102] verifying NodePressure condition ...
	I1019 16:30:25.453376   24026 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 16:30:25.453392   24026 node_conditions.go:123] node cpu capacity is 2
	I1019 16:30:25.453402   24026 node_conditions.go:105] duration metric: took 2.659989ms to run NodePressure ...
	I1019 16:30:25.453412   24026 start.go:242] waiting for startup goroutines ...
	I1019 16:30:25.453418   24026 start.go:247] waiting for cluster config update ...
	I1019 16:30:25.453428   24026 start.go:256] writing updated cluster config ...
	I1019 16:30:25.453752   24026 ssh_runner.go:195] Run: rm -f paused
	I1019 16:30:25.458164   24026 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:30:25.461864   24026 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxbk8" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:25.471537   24026 pod_ready.go:94] pod "coredns-66bc5c9577-hxbk8" is "Ready"
	I1019 16:30:25.471552   24026 pod_ready.go:86] duration metric: took 9.675926ms for pod "coredns-66bc5c9577-hxbk8" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:25.479086   24026 pod_ready.go:83] waiting for pod "etcd-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 16:30:27.485268   24026 pod_ready.go:104] pod "etcd-functional-328874" is not "Ready", error: <nil>
	I1019 16:30:29.486517   24026 pod_ready.go:94] pod "etcd-functional-328874" is "Ready"
	I1019 16:30:29.486559   24026 pod_ready.go:86] duration metric: took 4.007433202s for pod "etcd-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:29.489197   24026 pod_ready.go:83] waiting for pod "kube-apiserver-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 16:30:31.494649   24026 pod_ready.go:104] pod "kube-apiserver-functional-328874" is not "Ready", error: <nil>
	I1019 16:30:32.994570   24026 pod_ready.go:94] pod "kube-apiserver-functional-328874" is "Ready"
	I1019 16:30:32.994586   24026 pod_ready.go:86] duration metric: took 3.505374972s for pod "kube-apiserver-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:33.002782   24026 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:33.009323   24026 pod_ready.go:94] pod "kube-controller-manager-functional-328874" is "Ready"
	I1019 16:30:33.009337   24026 pod_ready.go:86] duration metric: took 6.541563ms for pod "kube-controller-manager-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:33.012210   24026 pod_ready.go:83] waiting for pod "kube-proxy-7lgrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:33.017358   24026 pod_ready.go:94] pod "kube-proxy-7lgrr" is "Ready"
	I1019 16:30:33.017374   24026 pod_ready.go:86] duration metric: took 5.151141ms for pod "kube-proxy-7lgrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:33.019965   24026 pod_ready.go:83] waiting for pod "kube-scheduler-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:33.593254   24026 pod_ready.go:94] pod "kube-scheduler-functional-328874" is "Ready"
	I1019 16:30:33.593268   24026 pod_ready.go:86] duration metric: took 573.290567ms for pod "kube-scheduler-functional-328874" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:30:33.593279   24026 pod_ready.go:40] duration metric: took 8.135083081s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:30:33.646598   24026 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 16:30:33.649606   24026 out.go:179] * Done! kubectl is now configured to use "functional-328874" cluster and "default" namespace by default
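	
	Note: the per-pod "Ready" polling shown above can be reproduced against the same cluster with kubectl wait; a rough equivalent, assuming the kubeconfig minikube wrote for this profile:
	
	  kubectl wait --namespace kube-system --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
	  kubectl wait --namespace kube-system --for=condition=Ready pod -l component=kube-apiserver --timeout=4m0s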
	
	
	==> CRI-O <==
	Oct 19 16:31:08 functional-328874 crio[3570]: time="2025-10-19T16:31:08.045705736Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-bxlwb Namespace:default ID:50edc18dd6fe594361aef45051c14aaff8c228e92c3d080f1456841cb217f64a UID:3fc9e6db-7402-401a-9675-eb19f4466055 NetNS:/var/run/netns/b3d3977f-d2db-4caa-a2c2-876415b1ce6b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d387f0}] Aliases:map[]}"
	Oct 19 16:31:08 functional-328874 crio[3570]: time="2025-10-19T16:31:08.045859705Z" level=info msg="Checking pod default_hello-node-75c85bcc94-bxlwb for CNI network kindnet (type=ptp)"
	Oct 19 16:31:08 functional-328874 crio[3570]: time="2025-10-19T16:31:08.049397744Z" level=info msg="Ran pod sandbox 50edc18dd6fe594361aef45051c14aaff8c228e92c3d080f1456841cb217f64a with infra container: default/hello-node-75c85bcc94-bxlwb/POD" id=b38994dd-34c2-4862-8dca-3bc30c394c3d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 16:31:08 functional-328874 crio[3570]: time="2025-10-19T16:31:08.052795639Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=097f8b21-2ac2-4ae2-b53f-4bd329eceffd name=/runtime.v1.ImageService/PullImage
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.065687357Z" level=info msg="Stopping pod sandbox: 9e0921c0d93e5dad9c7447e0ab53c624b823b80434aab4cea91ac2ae80364b35" id=0137fc60-e189-48ba-af6b-3aa0543e086f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.065749117Z" level=info msg="Stopped pod sandbox (already stopped): 9e0921c0d93e5dad9c7447e0ab53c624b823b80434aab4cea91ac2ae80364b35" id=0137fc60-e189-48ba-af6b-3aa0543e086f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.066371691Z" level=info msg="Removing pod sandbox: 9e0921c0d93e5dad9c7447e0ab53c624b823b80434aab4cea91ac2ae80364b35" id=ec274782-88dc-49f2-a07e-81b9eb550f4d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.072170037Z" level=info msg="Removed pod sandbox: 9e0921c0d93e5dad9c7447e0ab53c624b823b80434aab4cea91ac2ae80364b35" id=ec274782-88dc-49f2-a07e-81b9eb550f4d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.072745185Z" level=info msg="Stopping pod sandbox: 055b40bea11a40e6900952d5fae16150611bbb9791d5056778a3ecb0e9bc25a3" id=ab246e95-8731-4706-916e-fe6cc43bca4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.072793842Z" level=info msg="Stopped pod sandbox (already stopped): 055b40bea11a40e6900952d5fae16150611bbb9791d5056778a3ecb0e9bc25a3" id=ab246e95-8731-4706-916e-fe6cc43bca4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.073334906Z" level=info msg="Removing pod sandbox: 055b40bea11a40e6900952d5fae16150611bbb9791d5056778a3ecb0e9bc25a3" id=ec3c1158-a43b-45c0-af53-d6fe79a0b490 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.077209291Z" level=info msg="Removed pod sandbox: 055b40bea11a40e6900952d5fae16150611bbb9791d5056778a3ecb0e9bc25a3" id=ec3c1158-a43b-45c0-af53-d6fe79a0b490 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.077817654Z" level=info msg="Stopping pod sandbox: 9b15c53edd256a6aa9e1041898ccf6d66bbd7ebfb47316b6918fe1c813aa7d07" id=7807dd00-4379-42c4-ae93-8693c2e583f6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.077872965Z" level=info msg="Stopped pod sandbox (already stopped): 9b15c53edd256a6aa9e1041898ccf6d66bbd7ebfb47316b6918fe1c813aa7d07" id=7807dd00-4379-42c4-ae93-8693c2e583f6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.078219993Z" level=info msg="Removing pod sandbox: 9b15c53edd256a6aa9e1041898ccf6d66bbd7ebfb47316b6918fe1c813aa7d07" id=01e34a6c-15c0-4038-84b8-30722e262db0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.082051982Z" level=info msg="Removed pod sandbox: 9b15c53edd256a6aa9e1041898ccf6d66bbd7ebfb47316b6918fe1c813aa7d07" id=01e34a6c-15c0-4038-84b8-30722e262db0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 16:31:18 functional-328874 crio[3570]: time="2025-10-19T16:31:18.991570497Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=24663423-2999-489b-99c7-660260e7bf14 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:31:29 functional-328874 crio[3570]: time="2025-10-19T16:31:29.992192576Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cefbf82d-3303-4e3b-a4c8-aec366cc13d1 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:31:45 functional-328874 crio[3570]: time="2025-10-19T16:31:45.993959429Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7a2344ca-926f-487c-b78e-e10967dae2e7 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:32:18 functional-328874 crio[3570]: time="2025-10-19T16:32:18.99146832Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b529ef71-1a06-4a65-aa79-67bb4e9227c7 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:32:34 functional-328874 crio[3570]: time="2025-10-19T16:32:34.991460872Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6a4087ad-7da4-4845-8b0b-b7aa648e88e5 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:33:44 functional-328874 crio[3570]: time="2025-10-19T16:33:44.991332381Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9f82be4d-c78c-45b0-9068-731b0bb6138e name=/runtime.v1.ImageService/PullImage
	Oct 19 16:34:02 functional-328874 crio[3570]: time="2025-10-19T16:34:02.991341842Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=21bae94e-8a08-4b40-ba9c-cbb2da782113 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:36:31 functional-328874 crio[3570]: time="2025-10-19T16:36:31.991315408Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=24b2a7b5-4023-4892-a86c-9c87882db253 name=/runtime.v1.ImageService/PullImage
	Oct 19 16:36:48 functional-328874 crio[3570]: time="2025-10-19T16:36:48.991345404Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8a9c3cce-b50e-49d9-9c69-bdc79ccbc025 name=/runtime.v1.ImageService/PullImage
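	
	Note: between 16:31:08 and 16:36:48 CRI-O logged repeated "Pulling image: kicbase/echo-server:latest" entries with no corresponding pull completion, which suggests the pull was being retried and never finished, leaving the hello-node pod without its container. A manual check of pull state on the node, assuming the profile name from the logs above:
	
	  minikube -p functional-328874 ssh -- sudo crictl images | grep echo-server
	  minikube -p functional-328874 ssh -- sudo crictl pull kicbase/echo-server:latest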
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4fc5468d412ab       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   8e4095d71384a       sp-pod                                      default
	626b8b71bedcf       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   429240aef69b7       nginx-svc                                   default
	2a30fa624bf4d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   c1380cde84642       kindnet-rnknf                               kube-system
	c2db568fd48f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   639dc60f92ddc       coredns-66bc5c9577-hxbk8                    kube-system
	c17f1c2b86d1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   496cfd542f3db       storage-provisioner                         kube-system
	9b3bfe835e794       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   a5d4684171aca       kube-proxy-7lgrr                            kube-system
	2f25a2af6a9cc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   c395cadc0aea3       kube-apiserver-functional-328874            kube-system
	42d7f470581bc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   6f315785f1a8f       kube-controller-manager-functional-328874   kube-system
	7126285a58c3e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   27b74a80b5cb4       kube-scheduler-functional-328874            kube-system
	d3902c36a60d7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   fb6c70b27564a       etcd-functional-328874                      kube-system
	05e3c65bac33f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Created             storage-provisioner       2                   496cfd542f3db       storage-provisioner                         kube-system
	1ea61893b4a7a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   c1380cde84642       kindnet-rnknf                               kube-system
	d6dd353e9326f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   fb6c70b27564a       etcd-functional-328874                      kube-system
	54f8743c36bea       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   6f315785f1a8f       kube-controller-manager-functional-328874   kube-system
	5884904d4730f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   27b74a80b5cb4       kube-scheduler-functional-328874            kube-system
	ecffc18cc92ec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   639dc60f92ddc       coredns-66bc5c9577-hxbk8                    kube-system
	7a11f6e21bfb8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   a5d4684171aca       kube-proxy-7lgrr                            kube-system
	
	
	==> coredns [c2db568fd48f78c46674f1afb57aa1e8b987b73039cd0ed425d5b96ac2f24234] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42576 - 29638 "HINFO IN 2105635577704700243.8786815664405587714. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023560902s
	
	
	==> coredns [ecffc18cc92ece5f5d22b35bddc4303f0531a00e27fe53c004b077ecc57e7701] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54572 - 50715 "HINFO IN 5367120530847457092.502566715447863678. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01206511s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
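	
	Note: the "connection refused" errors against 10.96.0.1:443 in this exited coredns instance cover the window in which kube-apiserver was restarting; the kubernetes plugin blocks on "waiting for Kubernetes API" and starts serving once the API returns, and the final SIGTERM is the expected shutdown when the pod itself was restarted. The previous instance's log can be pulled back with:
	
	  kubectl -n kube-system logs coredns-66bc5c9577-hxbk8 --previous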
	
	
	==> describe nodes <==
	Name:               functional-328874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-328874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=functional-328874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_28_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:28:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-328874
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:40:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:39:22 +0000   Sun, 19 Oct 2025 16:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:39:22 +0000   Sun, 19 Oct 2025 16:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:39:22 +0000   Sun, 19 Oct 2025 16:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:39:22 +0000   Sun, 19 Oct 2025 16:29:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-328874
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                8811ae4d-9696-4db7-b42c-b5c677cbe300
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bxlwb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  default                     hello-node-connect-7d85dfc575-wmhbr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-hxbk8                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-328874                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-rnknf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-328874             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-328874    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7lgrr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-328874             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-328874 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-328874 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-328874 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-328874 event: Registered Node functional-328874 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-328874 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-328874 event: Registered Node functional-328874 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-328874 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-328874 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-328874 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-328874 event: Registered Node functional-328874 in Controller
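	
	Note: the node reports Ready with no memory, disk, or PID pressure, so the failures in this run do not look like node-level resource exhaustion. The same condition data can be queried directly:
	
	  kubectl get node functional-328874 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'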
	
	
	==> dmesg <==
	[Oct19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014509] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499579] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033288] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.729802] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.182201] kauditd_printk_skb: 36 callbacks suppressed
	[Oct19 16:21] overlayfs: idmapped layers are currently not supported
	[  +0.059278] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct19 16:27] overlayfs: idmapped layers are currently not supported
	[Oct19 16:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d3902c36a60d7007bf118a856812b41bbd3adb3c5931d84ce26f27391ceb386d] <==
	{"level":"warn","ts":"2025-10-19T16:30:20.686100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.720484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.753851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.772794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.804826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.826334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.849675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.883039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.910489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.945193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:20.968656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.052918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.085989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.112337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.143300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.176094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.204522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.236658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.260838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.287335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.317017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:30:21.377740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:40:19.434664Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1077}
	{"level":"info","ts":"2025-10-19T16:40:19.458671Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1077,"took":"23.602786ms","hash":4239443231,"current-db-size-bytes":3100672,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1314816,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-19T16:40:19.458720Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4239443231,"revision":1077,"compact-revision":-1}
	
	
	==> etcd [d6dd353e9326fab74dfd667e341f6f1a5a012c6be5d14e6a42b8c35c9343df48] <==
	{"level":"warn","ts":"2025-10-19T16:29:45.995271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.012926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.038028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.069418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.087178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.100108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:29:46.186596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44652","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:29:59.121479Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T16:29:59.121580Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-328874","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T16:29:59.121706Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:29:59.266108Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:29:59.266214Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:29:59.266257Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-19T16:29:59.266296Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T16:29:59.266360Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266429Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266469Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:29:59.266477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266599Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:29:59.266655Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:29:59.266686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:29:59.270281Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T16:29:59.270356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:29:59.270387Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T16:29:59.270394Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-328874","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 16:40:55 up 23 min,  0 user,  load average: 0.42, 0.47, 0.69
	Linux functional-328874 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ea61893b4a7a6c9c6cb14315b9c3c4ef56bd18e14a9d5a609bc309ab6466cd6] <==
	I1019 16:29:44.133047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 16:29:44.133391       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 16:29:44.133568       1 main.go:148] setting mtu 1500 for CNI 
	I1019 16:29:44.133618       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 16:29:44.133656       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:29:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:29:44.315685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:29:44.315768       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:29:44.318986       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:29:44.319876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 16:29:47.421198       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:29:47.421292       1 metrics.go:72] Registering metrics
	I1019 16:29:47.421369       1 controller.go:711] "Syncing nftables rules"
	I1019 16:29:54.315423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:29:54.315488       1 main.go:301] handling current node
	
	
	==> kindnet [2a30fa624bf4dd2f33d842820173a161c4875220f5d27219cbbda9f9e0587543] <==
	I1019 16:38:53.700633       1 main.go:301] handling current node
	I1019 16:39:03.701092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:03.701125       1 main.go:301] handling current node
	I1019 16:39:13.700867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:13.700906       1 main.go:301] handling current node
	I1019 16:39:23.700052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:23.700087       1 main.go:301] handling current node
	I1019 16:39:33.700590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:33.700622       1 main.go:301] handling current node
	I1019 16:39:43.702737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:43.702848       1 main.go:301] handling current node
	I1019 16:39:53.709867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:39:53.709989       1 main.go:301] handling current node
	I1019 16:40:03.700804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:40:03.700848       1 main.go:301] handling current node
	I1019 16:40:13.701325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:40:13.701372       1 main.go:301] handling current node
	I1019 16:40:23.709325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:40:23.709362       1 main.go:301] handling current node
	I1019 16:40:33.700495       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:40:33.700542       1 main.go:301] handling current node
	I1019 16:40:43.701052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:40:43.701084       1 main.go:301] handling current node
	I1019 16:40:53.710624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:40:53.710657       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2f25a2af6a9cc31bf15b450f4695b6ac691c23c31fd1be113cad3f103d1ec715] <==
	I1019 16:30:22.180220       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1019 16:30:22.182954       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 16:30:22.183687       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 16:30:22.183913       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 16:30:22.184296       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 16:30:22.184313       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 16:30:22.185972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 16:30:22.225863       1 cache.go:39] Caches are synced for autoregister controller
	I1019 16:30:22.233114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 16:30:22.263348       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 16:30:22.971820       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 16:30:23.041786       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 16:30:24.033046       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 16:30:24.231432       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 16:30:24.346306       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 16:30:24.358982       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 16:30:36.937259       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.219.36"}
	I1019 16:30:36.956286       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 16:30:36.956762       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 16:30:43.684956       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.50.63"}
	I1019 16:30:53.210491       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 16:30:53.378974       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.223.151"}
	E1019 16:31:00.523965       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:46988: use of closed network connection
	I1019 16:31:07.811454       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.2.20"}
	I1019 16:40:22.148035       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [42d7f470581bc38cb1d34a75915181d476073df2904b73faaf6f95394f0ce878] <==
	I1019 16:30:25.577770       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:30:25.578452       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:30:25.580709       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:30:25.584299       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:30:25.586413       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:30:25.596043       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 16:30:25.600349       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:30:25.606736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:30:25.606877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:30:25.606911       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 16:30:25.606942       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 16:30:25.607044       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 16:30:25.607151       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 16:30:25.607259       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-328874"
	I1019 16:30:25.607327       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 16:30:25.612391       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 16:30:25.612491       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 16:30:25.616005       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 16:30:25.616095       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:30:25.617143       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:30:25.617362       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 16:30:25.618605       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 16:30:25.618665       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 16:30:25.625427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:30:25.651745       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-controller-manager [54f8743c36bea2aa4415c7ed67c42430c15c73d2a01401b61764be1ecd33ed53] <==
	I1019 16:29:50.483150       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 16:29:50.481279       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:29:50.483834       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 16:29:50.486161       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 16:29:50.487722       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 16:29:50.489925       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:29:50.495116       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:29:50.498677       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:29:50.498701       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 16:29:50.498743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 16:29:50.502676       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 16:29:50.509303       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:29:50.512624       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 16:29:50.524477       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 16:29:50.524518       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 16:29:50.524700       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 16:29:50.524969       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 16:29:50.525116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:29:50.534315       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:29:50.538482       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 16:29:50.540732       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 16:29:50.543571       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 16:29:50.545342       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:29:50.566688       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 16:29:50.577130       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [7a11f6e21bfb845abafa9df5da09529c367cb557ccb147697a4b34da4543a390] <==
	I1019 16:29:43.726781       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:29:43.832211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1019 16:29:43.833032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-328874&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 16:29:47.370211       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:29:47.370246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:29:47.370378       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:29:47.494700       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:29:47.494772       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:29:47.573723       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:29:47.582950       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:29:47.582986       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:47.584116       1 config.go:200] "Starting service config controller"
	I1019 16:29:47.584134       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:29:47.585219       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:29:47.585228       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:29:47.585252       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:29:47.585256       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:29:47.585643       1 config.go:309] "Starting node config controller"
	I1019 16:29:47.585661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:29:47.585668       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:29:47.686593       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:29:47.686694       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:29:47.686708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [9b3bfe835e794840778b4b449dee89f256dd16543b8f0625d4f050d27633df26] <==
	I1019 16:30:23.466762       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:30:23.574627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:30:23.675182       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:30:23.675226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:30:23.675298       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:30:23.795638       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:30:23.795702       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:30:23.814710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:30:23.815138       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:30:23.815172       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:30:23.816585       1 config.go:200] "Starting service config controller"
	I1019 16:30:23.816610       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:30:23.823612       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:30:23.823632       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:30:23.823650       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:30:23.823654       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:30:23.824102       1 config.go:309] "Starting node config controller"
	I1019 16:30:23.824110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:30:23.824116       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:30:23.917042       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:30:23.925317       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:30:23.925359       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
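	
	Note: the "Kube-proxy configuration may be incomplete or incorrect" line in both instances is a warning, not an error: nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. The effective configuration kubeadm handed kube-proxy can be inspected with:
	
	  kubectl -n kube-system get configmap kube-proxy -o yaml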
	
	
	==> kube-scheduler [5884904d4730f90913fe442e22cb4ea363c81c57933ad1d9d3277b2a86688339] <==
	I1019 16:29:45.988661       1 serving.go:386] Generated self-signed cert in-memory
	I1019 16:29:48.013877       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:29:48.013921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:29:48.031708       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:29:48.034084       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 16:29:48.034186       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 16:29:48.034260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:29:48.035096       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:48.035172       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:48.035587       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:48.035649       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:48.134357       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 16:29:48.135727       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:48.135863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:59.114849       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 16:29:59.114872       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 16:29:59.114891       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 16:29:59.114918       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:29:59.114936       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 16:29:59.114956       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1019 16:29:59.115196       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 16:29:59.115253       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7126285a58c3e9b3745939dc652ca88f66128307d352144a3e23f7ed2758776a] <==
	I1019 16:30:20.102920       1 serving.go:386] Generated self-signed cert in-memory
	W1019 16:30:22.022981       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 16:30:22.023094       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 16:30:22.023136       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 16:30:22.029075       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 16:30:22.111338       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:30:22.115869       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:30:22.118489       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:30:22.122842       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:30:22.124435       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:30:22.122886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:30:22.227515       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 16:38:19 functional-328874 kubelet[3897]: E1019 16:38:19.990727    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:38:19 functional-328874 kubelet[3897]: E1019 16:38:19.990730    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:38:30 functional-328874 kubelet[3897]: E1019 16:38:30.991532    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:38:34 functional-328874 kubelet[3897]: E1019 16:38:34.991546    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:38:41 functional-328874 kubelet[3897]: E1019 16:38:41.991004    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:38:48 functional-328874 kubelet[3897]: E1019 16:38:48.991553    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:38:53 functional-328874 kubelet[3897]: E1019 16:38:53.992017    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:39:03 functional-328874 kubelet[3897]: E1019 16:39:03.992455    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:39:08 functional-328874 kubelet[3897]: E1019 16:39:08.990751    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:39:18 functional-328874 kubelet[3897]: E1019 16:39:18.991607    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:39:19 functional-328874 kubelet[3897]: E1019 16:39:19.991158    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:39:32 functional-328874 kubelet[3897]: E1019 16:39:32.990795    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:39:34 functional-328874 kubelet[3897]: E1019 16:39:34.991482    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:39:44 functional-328874 kubelet[3897]: E1019 16:39:44.991260    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:39:47 functional-328874 kubelet[3897]: E1019 16:39:47.991869    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:39:58 functional-328874 kubelet[3897]: E1019 16:39:58.991175    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:39:58 functional-328874 kubelet[3897]: E1019 16:39:58.991662    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:40:12 functional-328874 kubelet[3897]: E1019 16:40:12.991049    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:40:13 functional-328874 kubelet[3897]: E1019 16:40:13.991320    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:40:25 functional-328874 kubelet[3897]: E1019 16:40:25.991160    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:40:25 functional-328874 kubelet[3897]: E1019 16:40:25.991866    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:40:36 functional-328874 kubelet[3897]: E1019 16:40:36.990846    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:40:38 functional-328874 kubelet[3897]: E1019 16:40:38.991729    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	Oct 19 16:40:47 functional-328874 kubelet[3897]: E1019 16:40:47.992070    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bxlwb" podUID="3fc9e6db-7402-401a-9675-eb19f4466055"
	Oct 19 16:40:52 functional-328874 kubelet[3897]: E1019 16:40:52.991367    3897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wmhbr" podUID="1c238945-00bb-451e-bda5-6c199ab8393a"
	
	
	==> storage-provisioner [05e3c65bac33f3e2ebc1b8f61739cbba1364ac75b1225099662a573a8cb1277d] <==
	
	
	==> storage-provisioner [c17f1c2b86d1b91fd61e5d85131b86025c816acf50368a3e86a34605f593c0f2] <==
	W1019 16:40:31.719600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:33.723076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:33.729980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:35.733276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:35.738513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:37.742331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:37.749194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:39.752097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:39.756732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:41.760289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:41.764430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:43.767281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:43.773869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:45.778221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:45.782603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:47.786608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:47.793485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:49.796600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:49.801090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:51.804406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:51.811232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:53.814586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:53.820604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:55.826925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:40:55.834418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-328874 -n functional-328874
helpers_test.go:269: (dbg) Run:  kubectl --context functional-328874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-bxlwb hello-node-connect-7d85dfc575-wmhbr
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-328874 describe pod hello-node-75c85bcc94-bxlwb hello-node-connect-7d85dfc575-wmhbr
helpers_test.go:290: (dbg) kubectl --context functional-328874 describe pod hello-node-75c85bcc94-bxlwb hello-node-connect-7d85dfc575-wmhbr:

-- stdout --
	Name:             hello-node-75c85bcc94-bxlwb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-328874/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:31:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bzlqn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bzlqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m49s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bxlwb to functional-328874
	  Normal   Pulling    6m54s (x5 over 9m48s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m54s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m54s (x5 over 9m48s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m46s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m46s (x21 over 9m48s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-wmhbr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-328874/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:30:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c88n8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c88n8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wmhbr to functional-328874
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x22 over 10m)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
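Every ImagePullBackOff in the post-mortem above has a single root cause: the pods reference the unqualified image name kicbase/echo-server, and CRI-O's short-name policy (short-name-mode = "enforcing" in containers-registries.conf) refuses to resolve an unqualified name non-interactively when more than one unqualified-search registry could serve it, producing the "ambiguous list" error seen in the events. A minimal diagnostic sketch, assuming the standard /etc/containers/registries.conf path inside the minikube node:

	# Show the short-name policy CRI-O applies on the node
	minikube -p functional-328874 ssh -- grep -E 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf
	# Illustrative (not captured) output:
	#   unqualified-search-registries = ["docker.io", "quay.io"]
	#   short-name-mode = "enforcing"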

TestFunctional/parallel/ServiceCmd/DeployApp (600.82s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-328874 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-328874 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bxlwb" [3fc9e6db-7402-401a-9675-eb19f4466055] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1019 16:31:36.529883    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:33:52.668014    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:34:20.372140    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:52.668323    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-328874 -n functional-328874
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-19 16:41:08.202470384 +0000 UTC m=+1232.551688883
functional_test.go:1460: (dbg) Run:  kubectl --context functional-328874 describe po hello-node-75c85bcc94-bxlwb -n default
functional_test.go:1460: (dbg) kubectl --context functional-328874 describe po hello-node-75c85bcc94-bxlwb -n default:
Name:             hello-node-75c85bcc94-bxlwb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-328874/192.168.49.2
Start Time:       Sun, 19 Oct 2025 16:31:07 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bzlqn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bzlqn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bxlwb to functional-328874
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m58s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m58s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-328874 logs hello-node-75c85bcc94-bxlwb -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-328874 logs hello-node-75c85bcc94-bxlwb -n default: exit status 1 (97.886016ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bxlwb" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-328874 logs hello-node-75c85bcc94-bxlwb -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.82s)
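The DeployApp timeout is the same short-name failure: the deployment created at functional_test.go:1451 uses the bare name kicbase/echo-server, which enforcing mode can never pull. A hedged workaround sketch, assuming the image is published on Docker Hub (as the docker pull in ImageTagAndLoadDaemon below suggests), is to fully qualify the reference so no short-name resolution is attempted:

	# Recreate the deployment with a registry-qualified image reference
	kubectl --context functional-328874 delete deployment hello-node
	kubectl --context functional-328874 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-328874 rollout status deployment/hello-node --timeout=120s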

TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 service --namespace=default --https --url hello-node: exit status 115 (569.034069ms)

-- stdout --
	https://192.168.49.2:31995
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-328874 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 service hello-node --url --format={{.IP}}: exit status 115 (546.334346ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-328874 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 service hello-node --url: exit status 115 (486.414599ms)

-- stdout --
	http://192.168.49.2:31995
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-328874 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31995
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.49s)
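All three service subcommand failures (HTTPS, Format, URL) are downstream of the pull failure: minikube resolves the NodePort URL (http://192.168.49.2:31995) correctly but exits with SVC_UNREACHABLE because the hello-node selector matches no running pod. A triage sketch that separates a broken Service from unready backends, using the discovery.k8s.io/v1 EndpointSlice API:

	# Does the Service have any ready endpoints?
	kubectl --context functional-328874 get endpointslices -l kubernetes.io/service-name=hello-node
	# Are the selected pods actually running?
	kubectl --context functional-328874 get pods -l app=hello-node -o wide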

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image load --daemon kicbase/echo-server:functional-328874 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-328874" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)
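The image-load group shares one symptom: image load --daemon exits cleanly but image ls never shows the tag. A verification sketch; the localhost/ prefix below is an assumption inferred from the localhost/kicbase/echo-server:functional-328874 reference that ImageSaveDaemon inspects further down:

	# Load from the host Docker daemon, then look inside the CRI-O node
	minikube -p functional-328874 image load --daemon kicbase/echo-server:functional-328874
	minikube -p functional-328874 image ls
	# crio may register the image under a localhost/ prefix, so check the node directly too
	minikube -p functional-328874 ssh -- sudo crictl images | grep echo-server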

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image load --daemon kicbase/echo-server:functional-328874 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-328874" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-328874
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image load --daemon kicbase/echo-server:functional-328874 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-328874" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image save kicbase/echo-server:functional-328874 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1019 16:41:22.085656   31858 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:41:22.085905   31858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:22.085918   31858 out.go:374] Setting ErrFile to fd 2...
	I1019 16:41:22.085923   31858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:22.086212   31858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:41:22.086880   31858 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:41:22.086999   31858 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:41:22.087520   31858 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
	I1019 16:41:22.109553   31858 ssh_runner.go:195] Run: systemctl --version
	I1019 16:41:22.109614   31858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
	I1019 16:41:22.128603   31858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
	I1019 16:41:22.233244   31858 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1019 16:41:22.233305   31858 cache_images.go:255] Failed to load cached images for "functional-328874": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1019 16:41:22.233324   31858 cache_images.go:267] failed pushing to: functional-328874

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-328874
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image save --daemon kicbase/echo-server:functional-328874 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-328874
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-328874: exit status 1 (20.1037ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-328874

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-328874

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
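ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon form one chain: the save step produces no tarball, so every later step that consumes it fails ("stat ... echo-server-save.tar: no such file or directory" above). A manual two-step sketch, using a hypothetical /tmp path, that makes the failing stage observable:

	# Save to a tarball first, then load it into Docker by hand
	minikube -p functional-328874 image save kicbase/echo-server:functional-328874 /tmp/echo-server.tar
	ls -l /tmp/echo-server.tar   # a missing file means the save step itself failed
	docker load -i /tmp/echo-server.tar
	docker image ls | grep echo-server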

TestJSONOutput/pause/Command (1.99s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-280655 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-280655 --output=json --user=testUser: exit status 80 (1.990510826s)

-- stdout --
	{"specversion":"1.0","id":"d734a7d3-1eca-424b-b2ef-6afab462cb13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-280655 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"679ba011-e30d-42a7-ae6e-1b1cac2e02c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T16:58:59Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"540b757e-082b-4e2d-b158-7821a4b48e63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-280655 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.99s)
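This pause failure (and the matching unpause failure below) is a guest-side error rather than a JSON-output bug: sudo runc list -f json aborts with "open /run/runc: no such file or directory", i.e. runc's state directory is absent on the node, so minikube cannot enumerate containers to pause. A minimal diagnostic sketch, assuming the json-output-280655 profile is still up:

	# Is the runc state directory present, and what does CRI-O itself see?
	minikube -p json-output-280655 ssh -- sudo ls -ld /run/runc
	minikube -p json-output-280655 ssh -- sudo crictl ps
	# runc keeps per-container state under --root, which defaults to /run/runc when run as root
	minikube -p json-output-280655 ssh -- sudo runc --root /run/runc list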

TestJSONOutput/unpause/Command (2.25s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-280655 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-280655 --output=json --user=testUser: exit status 80 (2.245685653s)

-- stdout --
	{"specversion":"1.0","id":"95ec42ba-0842-41b1-b87d-6939ccbf746a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-280655 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9bbcc7c1-0c67-4cd1-8be7-704083031dca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T16:59:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"8882c539-b790-4261-910e-8aa0427591fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-280655 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.25s)

TestPause/serial/Pause (8.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-752547 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-752547 --alsologtostderr -v=5: exit status 80 (2.12141383s)

-- stdout --
	* Pausing node pause-752547 ... 
	
	

-- /stdout --
** stderr ** 
	I1019 17:16:48.368786  147628 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:48.369027  147628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:48.369056  147628 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:48.369075  147628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:48.369377  147628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:16:48.369695  147628 out.go:368] Setting JSON to false
	I1019 17:16:48.369755  147628 mustload.go:66] Loading cluster: pause-752547
	I1019 17:16:48.370251  147628 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:48.371418  147628 cli_runner.go:164] Run: docker container inspect pause-752547 --format={{.State.Status}}
	I1019 17:16:48.393567  147628 host.go:66] Checking if "pause-752547" exists ...
	I1019 17:16:48.393912  147628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:48.490863  147628 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 17:16:48.479547324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:16:48.491784  147628 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-752547 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:16:48.495286  147628 out.go:179] * Pausing node pause-752547 ... 
	I1019 17:16:48.499210  147628 host.go:66] Checking if "pause-752547" exists ...
	I1019 17:16:48.499545  147628 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:48.499605  147628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:48.523996  147628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:48.633476  147628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:48.650890  147628 pause.go:52] kubelet running: true
	I1019 17:16:48.650953  147628 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:16:48.911919  147628 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:16:48.912002  147628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:16:48.990761  147628 cri.go:89] found id: "07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83"
	I1019 17:16:48.990779  147628 cri.go:89] found id: "0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0"
	I1019 17:16:48.990783  147628 cri.go:89] found id: "b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2"
	I1019 17:16:48.990787  147628 cri.go:89] found id: "b062a3965984c4cd7524d66035a8a2c2abcd865fca79cbffd9533f56e1948ecb"
	I1019 17:16:48.990790  147628 cri.go:89] found id: "8a24b2b0a2c9c614c20987c20119908c64d441f8f029e558f32af2405c7f6e82"
	I1019 17:16:48.990794  147628 cri.go:89] found id: "94209b2d27552f9e8c63fa54400bcfb70580abf93c73e695e379ac43c413bb6e"
	I1019 17:16:48.990806  147628 cri.go:89] found id: "bbf49db30ebb7d6d396c472885ef43fe613819b7c230af8d3fe337f3fe609fa7"
	I1019 17:16:48.990809  147628 cri.go:89] found id: "6ee0aa7f3241ab005481f75cf8b244cc6d96f2b782648dcd0e1f6d6ddd50106a"
	I1019 17:16:48.990812  147628 cri.go:89] found id: "334cbbfd7bb38d91993a30dff7863196ac739f81e8e6849b96aba3bd922ddaac"
	I1019 17:16:48.990818  147628 cri.go:89] found id: "4da6e945ad26d71d23fab266356135c9a32f167e61ea01537dc707875e6ce17d"
	I1019 17:16:48.990821  147628 cri.go:89] found id: "47fd425298dfb82b464ea2631993ccdbafec7010573692d5712f9a87a01f16f0"
	I1019 17:16:48.990824  147628 cri.go:89] found id: "ea03ca461af340c24dd1aa86c5a7ad19d30dae629f7e6a053f5747e9dd873fc2"
	I1019 17:16:48.990827  147628 cri.go:89] found id: "3fd9354b9af733751887463d963607f9345e24820435ad304bd0a19963b80997"
	I1019 17:16:48.990830  147628 cri.go:89] found id: "94ea94eabd15553243a43b3b9125ed085c7958afe81d37108c820fadd358a52c"
	I1019 17:16:48.990832  147628 cri.go:89] found id: ""
	I1019 17:16:48.990879  147628 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:16:49.004505  147628 retry.go:31] will retry after 276.434693ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:49Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:49.282027  147628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:49.313631  147628 pause.go:52] kubelet running: false
	I1019 17:16:49.313757  147628 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:16:49.518845  147628 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:16:49.518930  147628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:16:49.612526  147628 cri.go:89] found id: "07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83"
	I1019 17:16:49.612555  147628 cri.go:89] found id: "0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0"
	I1019 17:16:49.612569  147628 cri.go:89] found id: "b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2"
	I1019 17:16:49.612572  147628 cri.go:89] found id: "b062a3965984c4cd7524d66035a8a2c2abcd865fca79cbffd9533f56e1948ecb"
	I1019 17:16:49.612575  147628 cri.go:89] found id: "8a24b2b0a2c9c614c20987c20119908c64d441f8f029e558f32af2405c7f6e82"
	I1019 17:16:49.612581  147628 cri.go:89] found id: "94209b2d27552f9e8c63fa54400bcfb70580abf93c73e695e379ac43c413bb6e"
	I1019 17:16:49.612584  147628 cri.go:89] found id: "bbf49db30ebb7d6d396c472885ef43fe613819b7c230af8d3fe337f3fe609fa7"
	I1019 17:16:49.612587  147628 cri.go:89] found id: "6ee0aa7f3241ab005481f75cf8b244cc6d96f2b782648dcd0e1f6d6ddd50106a"
	I1019 17:16:49.612590  147628 cri.go:89] found id: "334cbbfd7bb38d91993a30dff7863196ac739f81e8e6849b96aba3bd922ddaac"
	I1019 17:16:49.612597  147628 cri.go:89] found id: "4da6e945ad26d71d23fab266356135c9a32f167e61ea01537dc707875e6ce17d"
	I1019 17:16:49.612601  147628 cri.go:89] found id: "47fd425298dfb82b464ea2631993ccdbafec7010573692d5712f9a87a01f16f0"
	I1019 17:16:49.612604  147628 cri.go:89] found id: "ea03ca461af340c24dd1aa86c5a7ad19d30dae629f7e6a053f5747e9dd873fc2"
	I1019 17:16:49.612607  147628 cri.go:89] found id: "3fd9354b9af733751887463d963607f9345e24820435ad304bd0a19963b80997"
	I1019 17:16:49.612611  147628 cri.go:89] found id: "94ea94eabd15553243a43b3b9125ed085c7958afe81d37108c820fadd358a52c"
	I1019 17:16:49.612614  147628 cri.go:89] found id: ""
	I1019 17:16:49.612669  147628 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:16:49.624820  147628 retry.go:31] will retry after 415.541568ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:49Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:50.041477  147628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:50.059209  147628 pause.go:52] kubelet running: false
	I1019 17:16:50.059277  147628 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:16:50.255113  147628 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:16:50.255183  147628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:16:50.372117  147628 cri.go:89] found id: "07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83"
	I1019 17:16:50.372187  147628 cri.go:89] found id: "0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0"
	I1019 17:16:50.372205  147628 cri.go:89] found id: "b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2"
	I1019 17:16:50.372223  147628 cri.go:89] found id: "b062a3965984c4cd7524d66035a8a2c2abcd865fca79cbffd9533f56e1948ecb"
	I1019 17:16:50.372241  147628 cri.go:89] found id: "8a24b2b0a2c9c614c20987c20119908c64d441f8f029e558f32af2405c7f6e82"
	I1019 17:16:50.372278  147628 cri.go:89] found id: "94209b2d27552f9e8c63fa54400bcfb70580abf93c73e695e379ac43c413bb6e"
	I1019 17:16:50.372294  147628 cri.go:89] found id: "bbf49db30ebb7d6d396c472885ef43fe613819b7c230af8d3fe337f3fe609fa7"
	I1019 17:16:50.372310  147628 cri.go:89] found id: "6ee0aa7f3241ab005481f75cf8b244cc6d96f2b782648dcd0e1f6d6ddd50106a"
	I1019 17:16:50.372328  147628 cri.go:89] found id: "334cbbfd7bb38d91993a30dff7863196ac739f81e8e6849b96aba3bd922ddaac"
	I1019 17:16:50.372362  147628 cri.go:89] found id: "4da6e945ad26d71d23fab266356135c9a32f167e61ea01537dc707875e6ce17d"
	I1019 17:16:50.372379  147628 cri.go:89] found id: "47fd425298dfb82b464ea2631993ccdbafec7010573692d5712f9a87a01f16f0"
	I1019 17:16:50.372396  147628 cri.go:89] found id: "ea03ca461af340c24dd1aa86c5a7ad19d30dae629f7e6a053f5747e9dd873fc2"
	I1019 17:16:50.372425  147628 cri.go:89] found id: "3fd9354b9af733751887463d963607f9345e24820435ad304bd0a19963b80997"
	I1019 17:16:50.372449  147628 cri.go:89] found id: "94ea94eabd15553243a43b3b9125ed085c7958afe81d37108c820fadd358a52c"
	I1019 17:16:50.372466  147628 cri.go:89] found id: ""
	I1019 17:16:50.372592  147628 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:16:50.392776  147628 out.go:203] 
	W1019 17:16:50.395721  147628 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:16:50.395796  147628 out.go:285] * 
	* 
	W1019 17:16:50.400674  147628 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:16:50.404212  147628 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-752547 --alsologtostderr -v=5" : exit status 80
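The failure above is mechanical: pause disables the kubelet, enumerates CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces via crictl, then asks `sudo runc list -f json` for the running set. The runc state directory /run/runc does not exist inside the node, so every attempt fails with the same error; retry.go backs off twice (276ms, then 415ms) and the third failure aborts with GUEST_PAUSE / exit status 80. Below is a minimal sketch of that list-and-retry shape; the helper names are invented for the sketch and are not minikube's actual API.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunc mirrors the failing step: shell out to `sudo runc list -f json`
// and surface the combined output on error, as the log does.
func listRunc() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("list running: runc: %w\nstdout/stderr:\n%s", err, out)
	}
	return nil
}

// retry re-runs fn with a jittered, growing delay before giving up,
// echoing the "will retry after ..." lines above. The real retry.go
// presumably computes its backoff differently; this is only the shape.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	if err := retry(3, 250*time.Millisecond, listRunc); err != nil {
		// minikube maps this to GUEST_PAUSE and exit status 80.
		fmt.Println("X Exiting due to GUEST_PAUSE: Pause:", err)
	}
}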
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-752547
helpers_test.go:243: (dbg) docker inspect pause-752547:

-- stdout --
	[
	    {
	        "Id": "ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40",
	        "Created": "2025-10-19T17:15:04.33945943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 135557,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:15:04.418376088Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/hostname",
	        "HostsPath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/hosts",
	        "LogPath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40-json.log",
	        "Name": "/pause-752547",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-752547:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-752547",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40",
	                "LowerDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-752547",
	                "Source": "/var/lib/docker/volumes/pause-752547/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-752547",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-752547",
	                "name.minikube.sigs.k8s.io": "pause-752547",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "563c3307a3ce22fa1cce6a276d686e75379d9e2397bcaabca1c6583f0b969450",
	            "SandboxKey": "/var/run/docker/netns/563c3307a3ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-752547": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:5c:72:b3:01:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d61af5095f6a50d5c2ca76f229911e6c43a43d0573728031002cc79109832a3f",
	                    "EndpointID": "6a0b4a6e32317415cd6d2e880eee9389cb6c8ee0c90e1f6f6b068c1122cc2a4e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-752547",
	                        "ecacb72ceacb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
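The NetworkSettings.Ports map in this inspect output is what the earlier cli_runner template reads to find the forwarded SSH port (22/tcp -> 127.0.0.1:32973, the same endpoint sshutil dials). A self-contained sketch of that lookup follows; the template string is copied from the log, while the Go wrapper around it is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort applies the same Go template the log shows to resolve which
// host port Docker mapped to a container port ("22/tcp" -> "32973").
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("pause-752547", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Matches the sshutil line earlier: new ssh client on 127.0.0.1:32973.
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}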
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-752547 -n pause-752547
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-752547 -n pause-752547: exit status 2 (435.589569ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
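Note that `minikube status` printed Running yet exited 2; the harness marks this "may be ok" because a nonzero status exit here presumably reflects a non-running component (the kubelet was just disabled by the pause attempt) rather than a command error. A sketch of reading both the formatted output and the exit code, reusing the binary path and profile from the line above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the harness above.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "pause-752547", "-n", "pause-752547")
	out, err := cmd.Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // 2 in the run above, with "Running" on stdout
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code)
}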
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-752547 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-752547 logs -n 25: (1.77824608s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-953581 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p pause-752547 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ pause-752547             │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ ssh     │ -p cilium-953581 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status docker --all --full --no-pager                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat docker --no-pager                                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/docker/daemon.json                                                          │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo docker system info                                                                   │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cri-dockerd --version                                                                │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat containerd --no-pager                                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/containerd/config.toml                                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo containerd config dump                                                               │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status crio --all --full --no-pager                                        │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat crio --no-pager                                                        │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo crio config                                                                          │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ delete  │ -p cilium-953581                                                                                           │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p force-systemd-env-386165 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-386165 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ pause   │ -p pause-752547 --alsologtostderr -v=5                                                                     │ pause-752547             │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:16:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
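The header spells out the klog format every line below follows: severity letter, mmdd date, wall-clock time, thread id, source file:line, then the message. A small parser sketch for that format; the regexp is written from the header above, not taken from minikube.

package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := "I1019 17:16:26.200504  144876 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date(mmdd)=%s time=%s thread=%s at=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}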
	I1019 17:16:26.200504  144876 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:26.200701  144876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:26.200725  144876 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:26.200743  144876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:26.201018  144876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:16:26.201472  144876 out.go:368] Setting JSON to false
	I1019 17:16:26.202454  144876 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3534,"bootTime":1760890652,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:16:26.202669  144876 start.go:143] virtualization:  
	I1019 17:16:26.207847  144876 out.go:179] * [force-systemd-env-386165] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:16:26.211285  144876 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:16:26.211365  144876 notify.go:221] Checking for updates...
	I1019 17:16:26.217401  144876 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:16:26.220461  144876 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:16:26.223449  144876 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:16:26.226473  144876 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:16:26.229488  144876 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1019 17:16:26.233023  144876 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:26.233123  144876 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:16:26.277596  144876 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:16:26.277811  144876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:26.397013  144876 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:16:26.372908646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:16:26.397143  144876 docker.go:319] overlay module found
	I1019 17:16:26.400388  144876 out.go:179] * Using the docker driver based on user configuration
	I1019 17:16:26.403220  144876 start.go:309] selected driver: docker
	I1019 17:16:26.403258  144876 start.go:930] validating driver "docker" against <nil>
	I1019 17:16:26.403272  144876 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:16:26.404218  144876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:26.494036  144876 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:16:26.483069382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:16:26.494217  144876 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:16:26.494459  144876 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 17:16:26.497668  144876 out.go:179] * Using Docker driver with root privileges
	I1019 17:16:26.501839  144876 cni.go:84] Creating CNI manager for ""
	I1019 17:16:26.501923  144876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:26.501933  144876 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:16:26.502197  144876 start.go:353] cluster config:
	{Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:26.507858  144876 out.go:179] * Starting "force-systemd-env-386165" primary control-plane node in "force-systemd-env-386165" cluster
	I1019 17:16:26.513545  144876 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:16:26.517764  144876 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:16:26.520194  144876 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:26.520254  144876 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:16:26.520263  144876 cache.go:59] Caching tarball of preloaded images
	I1019 17:16:26.520361  144876 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:16:26.520370  144876 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
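The preload cache hit above is resolved purely by filename: schema version, Kubernetes version, runtime, storage driver, and architecture are baked into the tarball name, so a match means no download. A sketch of assembling the name seen in the logged path; the helper and the field order are read off that path, not from minikube's source, so treat them as assumptions.

package main

import "fmt"

// preloadName builds the cached tarball name seen above; "v18" is the
// preload schema version and "overlay" the storage-driver segment, both
// inferred from the logged path.
func preloadName(schema, k8sVersion, runtime, arch string) string {
	return fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-overlay-%s.tar.lz4",
		schema, k8sVersion, runtime, arch)
}

func main() {
	fmt.Println(preloadName("v18", "v1.34.1", "cri-o", "arm64"))
	// preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
}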
	I1019 17:16:26.520494  144876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/config.json ...
	I1019 17:16:26.520514  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/config.json: {Name:mk022111d787195f02e6c57e7230af85b15122b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:26.520730  144876 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:16:26.545009  144876 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:16:26.545034  144876 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:16:26.545048  144876 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:16:26.545070  144876 start.go:360] acquireMachinesLock for force-systemd-env-386165: {Name:mkafa6f7a11b13b8d9ed92f31c974241a4f149dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:16:26.545174  144876 start.go:364] duration metric: took 88.165µs to acquireMachinesLock for "force-systemd-env-386165"
	I1019 17:16:26.545203  144876 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:26.545265  144876 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:16:24.644408  144554 out.go:252] * Updating the running docker "pause-752547" container ...
	I1019 17:16:24.644451  144554 machine.go:94] provisionDockerMachine start ...
	I1019 17:16:24.644546  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:24.677216  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:24.677581  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:24.677595  144554 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:16:24.842529  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-752547
	
	I1019 17:16:24.842591  144554 ubuntu.go:182] provisioning hostname "pause-752547"
	I1019 17:16:24.842663  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:24.865394  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:24.866031  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:24.866050  144554 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-752547 && echo "pause-752547" | sudo tee /etc/hostname
	I1019 17:16:25.042902  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-752547
	
	I1019 17:16:25.042990  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:25.072324  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:25.072738  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:25.072765  144554 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-752547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-752547/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-752547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:16:25.254491  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:16:25.254527  144554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:16:25.254570  144554 ubuntu.go:190] setting up certificates
	I1019 17:16:25.254581  144554 provision.go:84] configureAuth start
	I1019 17:16:25.254639  144554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-752547
	I1019 17:16:25.279568  144554 provision.go:143] copyHostCerts
	I1019 17:16:25.279646  144554 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:16:25.279665  144554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:16:25.279746  144554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:16:25.279857  144554 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:16:25.279868  144554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:16:25.279894  144554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:16:25.279962  144554 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:16:25.279973  144554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:16:25.280001  144554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:16:25.280055  144554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.pause-752547 san=[127.0.0.1 192.168.76.2 localhost minikube pause-752547]
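configureAuth regenerates a server certificate whose SANs are exactly the logged list: loopback, the container's network IP, and the host aliases. A self-contained sketch that produces a certificate with those SANs; it self-signs for brevity where minikube signs with the CA key referenced above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-752547"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs copied from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "pause-752547"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}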
	I1019 17:16:26.075170  144554 provision.go:177] copyRemoteCerts
	I1019 17:16:26.075317  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:16:26.075379  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:26.095289  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:26.212638  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:16:26.233513  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 17:16:26.261982  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:16:26.284016  144554 provision.go:87] duration metric: took 1.029413792s to configureAuth
	I1019 17:16:26.284040  144554 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:16:26.284253  144554 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:26.284357  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:26.306202  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:26.306504  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:26.306525  144554 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:16:26.548435  144876 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:16:26.548669  144876 start.go:159] libmachine.API.Create for "force-systemd-env-386165" (driver="docker")
	I1019 17:16:26.548706  144876 client.go:171] LocalClient.Create starting
	I1019 17:16:26.548782  144876 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 17:16:26.548821  144876 main.go:143] libmachine: Decoding PEM data...
	I1019 17:16:26.548842  144876 main.go:143] libmachine: Parsing certificate...
	I1019 17:16:26.548898  144876 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 17:16:26.548915  144876 main.go:143] libmachine: Decoding PEM data...
	I1019 17:16:26.548924  144876 main.go:143] libmachine: Parsing certificate...
	I1019 17:16:26.549283  144876 cli_runner.go:164] Run: docker network inspect force-systemd-env-386165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:16:26.569795  144876 cli_runner.go:211] docker network inspect force-systemd-env-386165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:16:26.569889  144876 network_create.go:284] running [docker network inspect force-systemd-env-386165] to gather additional debugging logs...
	I1019 17:16:26.569912  144876 cli_runner.go:164] Run: docker network inspect force-systemd-env-386165
	W1019 17:16:26.589410  144876 cli_runner.go:211] docker network inspect force-systemd-env-386165 returned with exit code 1
	I1019 17:16:26.589454  144876 network_create.go:287] error running [docker network inspect force-systemd-env-386165]: docker network inspect force-systemd-env-386165: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-386165 not found
	I1019 17:16:26.589468  144876 network_create.go:289] output of [docker network inspect force-systemd-env-386165]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-386165 not found
	
	** /stderr **
	I1019 17:16:26.589575  144876 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:26.607165  144876 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
	I1019 17:16:26.607436  144876 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74bebb68d32f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:9e:84:17:01:b0} reservation:<nil>}
	I1019 17:16:26.607716  144876 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9382370e2eea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:16:7c:3f:44:e1} reservation:<nil>}
	I1019 17:16:26.608007  144876 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d61af5095f6a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:20:1a:dc:35:6d} reservation:<nil>}
	I1019 17:16:26.608383  144876 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c9700}
	I1019 17:16:26.608405  144876 network_create.go:124] attempt to create docker network force-systemd-env-386165 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:16:26.608472  144876 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-386165 force-systemd-env-386165
	I1019 17:16:26.677047  144876 network_create.go:108] docker network force-systemd-env-386165 192.168.85.0/24 created
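	
	Note: the four "skipping subnet ... that is taken" lines and the final "using free private subnet" line above show the scan order: candidate /24 networks advance the third octet by 9 (49, 58, 67, 76, 85, ...) until one has no existing bridge. A sketch of that scan under exactly those observed assumptions (the helper name is hypothetical):
	
	    package main
	
	    import "fmt"
	
	    // freeSubnet walks the candidate private /24s in the order the log
	    // shows and returns the first one not already claimed by a bridge.
	    func freeSubnet(taken map[string]bool) string {
	        for octet := 49; octet <= 247; octet += 9 {
	            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
	            if !taken[cidr] {
	                return cidr
	            }
	        }
	        return "" // no free candidate in this range
	    }
	
	    func main() {
	        taken := map[string]bool{
	            "192.168.49.0/24": true, "192.168.58.0/24": true,
	            "192.168.67.0/24": true, "192.168.76.0/24": true,
	        }
	        fmt.Println(freeSubnet(taken)) // prints 192.168.85.0/24
	    }
	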
	I1019 17:16:26.677080  144876 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-386165" container
	I1019 17:16:26.677169  144876 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:16:26.694316  144876 cli_runner.go:164] Run: docker volume create force-systemd-env-386165 --label name.minikube.sigs.k8s.io=force-systemd-env-386165 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:16:26.712616  144876 oci.go:103] Successfully created a docker volume force-systemd-env-386165
	I1019 17:16:26.712704  144876 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-386165-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-386165 --entrypoint /usr/bin/test -v force-systemd-env-386165:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:16:27.203407  144876 oci.go:107] Successfully prepared a docker volume force-systemd-env-386165
	I1019 17:16:27.203452  144876 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:27.203471  144876 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:16:27.203554  144876 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-386165:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
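	
	Note: preload extraction works by mounting the lz4 tarball read-only into a throwaway container alongside the named volume, then untarring straight into the volume, so the kic node container later starts with /var pre-populated. A Go sketch of the same docker invocation, mirroring the command in the log line above (illustrative, not the minikube implementation):
	
	    package main
	
	    import "os/exec"
	
	    // extractPreload untars the preloaded-images tarball into the
	    // machine volume via a one-shot container, as logged above.
	    func extractPreload(tarball, volume, image string) error {
	        return exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", tarball+":/preloaded.tar:ro",
	            "-v", volume+":/extractDir",
	            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
	    }
	
	    func main() {
	        // Placeholder arguments; the real values are the full tarball
	        // path, volume name, and kicbase image from the log line above.
	        _ = extractPreload("preloaded-images.tar.lz4", "force-systemd-env-386165",
	            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757")
	    }
	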
	I1019 17:16:31.686565  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:16:31.686587  144554 machine.go:97] duration metric: took 7.042127092s to provisionDockerMachine
	I1019 17:16:31.686598  144554 start.go:293] postStartSetup for "pause-752547" (driver="docker")
	I1019 17:16:31.686609  144554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:16:31.686678  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:16:31.686726  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:31.714321  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:31.822400  144554 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:16:31.825874  144554 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:16:31.825905  144554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:16:31.825916  144554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:16:31.825968  144554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:16:31.826213  144554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:16:31.826342  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:16:31.835102  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:31.868085  144554 start.go:296] duration metric: took 181.471488ms for postStartSetup
	I1019 17:16:31.868184  144554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:16:31.868231  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:31.888675  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:32.009599  144554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:16:32.018060  144554 fix.go:56] duration metric: took 7.420531967s for fixHost
	I1019 17:16:32.018084  144554 start.go:83] releasing machines lock for "pause-752547", held for 7.420588303s
	I1019 17:16:32.018152  144554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-752547
	I1019 17:16:32.040845  144554 ssh_runner.go:195] Run: cat /version.json
	I1019 17:16:32.040900  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:32.041143  144554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:16:32.041200  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:32.065706  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:32.070609  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:32.305856  144554 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:32.313413  144554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:16:32.416115  144554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:16:32.426660  144554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:16:32.426726  144554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:16:32.448820  144554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:16:32.448846  144554 start.go:496] detecting cgroup driver to use...
	I1019 17:16:32.448878  144554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:16:32.448943  144554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:16:32.471481  144554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:16:32.501872  144554 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:16:32.501937  144554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:16:32.522194  144554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:16:32.540603  144554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:16:32.839145  144554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:16:33.128937  144554 docker.go:234] disabling docker service ...
	I1019 17:16:33.129013  144554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:16:33.164688  144554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:16:33.213780  144554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:16:33.554487  144554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:16:33.763758  144554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:16:33.781345  144554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:16:33.802279  144554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:16:33.802348  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.823975  144554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:16:33.824053  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.835148  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.851110  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.863287  144554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:16:33.877100  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.888929  144554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.900888  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.914668  144554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:16:33.928523  144554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:16:33.941370  144554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:34.159564  144554 ssh_runner.go:195] Run: sudo systemctl restart crio
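	
	Note: the sed/grep sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and prepend net.ipv4.ip_unprivileged_port_start=0 to default_sysctls; it also removes /etc/cni/net.mk and enables IPv4 forwarding before the daemon-reload and CRI-O restart. After those edits the drop-in should look roughly like the fragment below (reconstructed from the sed expressions, not read from the node; the TOML section headers are assumed from stock CRI-O config layout):
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	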
	I1019 17:16:34.370820  144554 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:16:34.370894  144554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:16:34.375201  144554 start.go:564] Will wait 60s for crictl version
	I1019 17:16:34.375257  144554 ssh_runner.go:195] Run: which crictl
	I1019 17:16:34.379411  144554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:16:34.418111  144554 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:16:34.418196  144554 ssh_runner.go:195] Run: crio --version
	I1019 17:16:34.456428  144554 ssh_runner.go:195] Run: crio --version
	I1019 17:16:34.496646  144554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:16:31.675253  144876 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-386165:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.471662091s)
	I1019 17:16:31.675283  144876 kic.go:203] duration metric: took 4.471808882s to extract preloaded images to volume ...
	W1019 17:16:31.675427  144876 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:16:31.675534  144876 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:16:31.772075  144876 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-386165 --name force-systemd-env-386165 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-386165 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-386165 --network force-systemd-env-386165 --ip 192.168.85.2 --volume force-systemd-env-386165:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:16:32.177592  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Running}}
	I1019 17:16:32.206452  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Status}}
	I1019 17:16:32.234697  144876 cli_runner.go:164] Run: docker exec force-systemd-env-386165 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:16:32.297740  144876 oci.go:144] the created container "force-systemd-env-386165" has a running status.
	I1019 17:16:32.297767  144876 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa...
	I1019 17:16:33.270420  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1019 17:16:33.270488  144876 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:16:33.298126  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Status}}
	I1019 17:16:33.326980  144876 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:16:33.327000  144876 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-386165 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:16:33.396393  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Status}}
	I1019 17:16:33.427219  144876 machine.go:94] provisionDockerMachine start ...
	I1019 17:16:33.427330  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:33.457830  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:33.458173  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:33.458183  144876 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:16:33.690299  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-386165
	
	I1019 17:16:33.690326  144876 ubuntu.go:182] provisioning hostname "force-systemd-env-386165"
	I1019 17:16:33.690414  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:33.720262  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:33.720568  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:33.720593  144876 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-386165 && echo "force-systemd-env-386165" | sudo tee /etc/hostname
	I1019 17:16:33.947801  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-386165
	
	I1019 17:16:33.947882  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:33.980537  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:33.980849  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:33.980876  144876 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-386165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-386165/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-386165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:16:34.172988  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:16:34.173079  144876 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:16:34.173113  144876 ubuntu.go:190] setting up certificates
	I1019 17:16:34.173142  144876 provision.go:84] configureAuth start
	I1019 17:16:34.173241  144876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-386165
	I1019 17:16:34.195876  144876 provision.go:143] copyHostCerts
	I1019 17:16:34.195918  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:16:34.195951  144876 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:16:34.195961  144876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:16:34.196049  144876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:16:34.196126  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:16:34.196148  144876 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:16:34.196163  144876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:16:34.196191  144876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:16:34.196235  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:16:34.196254  144876 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:16:34.196259  144876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:16:34.196289  144876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:16:34.196338  144876 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-386165 san=[127.0.0.1 192.168.85.2 force-systemd-env-386165 localhost minikube]
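	
	Note: the server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.85.2, the machine name, localhost, minikube) so the Docker machine endpoint verifies under any of those names. A compact crypto/x509 sketch of a SAN-bearing certificate with those values (self-signed here for brevity; the real cert is signed by the minikube CA):
	
	    package main
	
	    import (
	        "crypto/ecdsa"
	        "crypto/elliptic"
	        "crypto/rand"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "fmt"
	        "math/big"
	        "net"
	        "time"
	    )
	
	    func main() {
	        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-386165"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(24 * time.Hour),
	            // SAN entries copied from the provision.go line above.
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	            DNSNames:    []string{"force-systemd-env-386165", "localhost", "minikube"},
	            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        fmt.Println(len(der), err)
	    }
	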
	I1019 17:16:35.104283  144876 provision.go:177] copyRemoteCerts
	I1019 17:16:35.104357  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:16:35.104427  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.125858  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:35.232645  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1019 17:16:35.232706  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1019 17:16:35.259207  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1019 17:16:35.259271  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:16:35.288197  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1019 17:16:35.288276  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:16:35.315637  144876 provision.go:87] duration metric: took 1.142467847s to configureAuth
	I1019 17:16:35.315664  144876 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:16:35.315881  144876 config.go:182] Loaded profile config "force-systemd-env-386165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:35.316014  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.339409  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:35.339722  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:35.339742  144876 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:16:35.708275  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:16:35.708298  144876 machine.go:97] duration metric: took 2.28105391s to provisionDockerMachine
	I1019 17:16:35.708309  144876 client.go:174] duration metric: took 9.159590297s to LocalClient.Create
	I1019 17:16:35.708340  144876 start.go:167] duration metric: took 9.159655413s to libmachine.API.Create "force-systemd-env-386165"
	I1019 17:16:35.708358  144876 start.go:293] postStartSetup for "force-systemd-env-386165" (driver="docker")
	I1019 17:16:35.708370  144876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:16:35.708447  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:16:35.708508  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.738726  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:35.859609  144876 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:16:35.863519  144876 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:16:35.863546  144876 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:16:35.863558  144876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:16:35.863609  144876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:16:35.863683  144876 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:16:35.863689  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> /etc/ssl/certs/41112.pem
	I1019 17:16:35.863792  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:16:35.875631  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:35.910590  144876 start.go:296] duration metric: took 202.213743ms for postStartSetup
	I1019 17:16:35.911048  144876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-386165
	I1019 17:16:35.941110  144876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/config.json ...
	I1019 17:16:35.941372  144876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:16:35.941411  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.977900  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:36.088087  144876 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:16:36.096099  144876 start.go:128] duration metric: took 9.550820985s to createHost
	I1019 17:16:36.096121  144876 start.go:83] releasing machines lock for "force-systemd-env-386165", held for 9.550934496s
	I1019 17:16:36.096188  144876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-386165
	I1019 17:16:36.118951  144876 ssh_runner.go:195] Run: cat /version.json
	I1019 17:16:36.119027  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:36.119263  144876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:16:36.119321  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:36.149704  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:36.160996  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:34.499497  144554 cli_runner.go:164] Run: docker network inspect pause-752547 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:34.540166  144554 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:16:34.544831  144554 kubeadm.go:884] updating cluster {Name:pause-752547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-752547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:34.544965  144554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:34.545018  144554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:34.592699  144554 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:34.592720  144554 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:34.592775  144554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:34.640677  144554 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:34.640699  144554 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:34.640709  144554 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:16:34.640833  144554 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-752547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-752547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:16:34.640917  144554 ssh_runner.go:195] Run: crio config
	I1019 17:16:34.724535  144554 cni.go:84] Creating CNI manager for ""
	I1019 17:16:34.724559  144554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:34.724581  144554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:16:34.724605  144554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-752547 NodeName:pause-752547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:34.724774  144554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-752547"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:16:34.724934  144554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:34.736678  144554 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:34.736765  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:34.745017  144554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1019 17:16:34.766725  144554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:34.783896  144554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
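	
	Note: the 2209-byte kubeadm.yaml.new shipped here is the four-document config printed above, one file holding InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---"; it is later diffed against the deployed copy to decide whether the control plane must be regenerated. A trivial Go sketch of splitting such a multi-document manifest (illustrative only):
	
	    package main
	
	    import (
	        "fmt"
	        "strings"
	    )
	
	    // splitDocs separates a multi-document YAML manifest on the
	    // standard "---" document boundary.
	    func splitDocs(manifest string) []string {
	        return strings.Split(manifest, "\n---\n")
	    }
	
	    func main() {
	        m := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
	        fmt.Println(len(splitDocs(m))) // 4
	    }
	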
	I1019 17:16:34.799740  144554 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:34.803811  144554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:34.964673  144554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:34.977840  144554 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547 for IP: 192.168.76.2
	I1019 17:16:34.977858  144554 certs.go:195] generating shared ca certs ...
	I1019 17:16:34.977874  144554 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:34.978013  144554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:16:34.978053  144554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:16:34.978059  144554 certs.go:257] generating profile certs ...
	I1019 17:16:34.978136  144554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key
	I1019 17:16:34.978199  144554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/apiserver.key.20454def
	I1019 17:16:34.978239  144554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/proxy-client.key
	I1019 17:16:34.978340  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:16:34.978366  144554 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:34.978379  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:16:34.978404  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:34.978426  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:34.978447  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:34.978486  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:34.979063  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:35.001221  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:35.024239  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:35.064627  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:16:35.083938  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 17:16:35.108496  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:16:35.132074  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:35.154317  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:16:35.178042  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:35.216072  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:16:35.273928  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:16:35.342933  144554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:16:35.378336  144554 ssh_runner.go:195] Run: openssl version
	I1019 17:16:35.395187  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:16:35.430574  144554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:16:35.450679  144554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:16:35.450744  144554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:16:35.573315  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:16:35.604942  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:35.623091  144554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:35.635492  144554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:35.635592  144554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:35.770132  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:16:35.804855  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:16:35.821202  144554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:16:35.831344  144554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:16:35.831424  144554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:16:35.909005  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
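	
	Note: the repeated ls / openssl x509 -hash / ln -fs triples above implement OpenSSL's hashed-directory lookup: each CA certificate under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0 in this run) so TLS clients on the node trust it. A sketch of one iteration, assuming the same tools are on PATH (the helper is hypothetical):
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )
	
	    // linkCert computes the OpenSSL subject hash of a PEM certificate
	    // and symlinks it into /etc/ssl/certs as <hash>.0, as the log does.
	    func linkCert(pem string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	        if err != nil {
	            return err
	        }
	        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	        return os.Symlink(pem, link)
	    }
	
	    func main() {
	        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }
	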
	I1019 17:16:35.939421  144554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:35.950219  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:16:36.021502  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:16:36.076518  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:16:36.231203  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:16:36.294169  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:16:36.355675  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 17:16:36.419827  144554 kubeadm.go:401] StartCluster: {Name:pause-752547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-752547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:36.419952  144554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:36.420013  144554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:36.499104  144554 cri.go:89] found id: "07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83"
	I1019 17:16:36.499171  144554 cri.go:89] found id: "0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0"
	I1019 17:16:36.499201  144554 cri.go:89] found id: "b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2"
	I1019 17:16:36.499219  144554 cri.go:89] found id: "b062a3965984c4cd7524d66035a8a2c2abcd865fca79cbffd9533f56e1948ecb"
	I1019 17:16:36.499238  144554 cri.go:89] found id: "8a24b2b0a2c9c614c20987c20119908c64d441f8f029e558f32af2405c7f6e82"
	I1019 17:16:36.499264  144554 cri.go:89] found id: "94209b2d27552f9e8c63fa54400bcfb70580abf93c73e695e379ac43c413bb6e"
	I1019 17:16:36.499290  144554 cri.go:89] found id: "bbf49db30ebb7d6d396c472885ef43fe613819b7c230af8d3fe337f3fe609fa7"
	I1019 17:16:36.499306  144554 cri.go:89] found id: "6ee0aa7f3241ab005481f75cf8b244cc6d96f2b782648dcd0e1f6d6ddd50106a"
	I1019 17:16:36.499323  144554 cri.go:89] found id: "334cbbfd7bb38d91993a30dff7863196ac739f81e8e6849b96aba3bd922ddaac"
	I1019 17:16:36.499366  144554 cri.go:89] found id: "4da6e945ad26d71d23fab266356135c9a32f167e61ea01537dc707875e6ce17d"
	I1019 17:16:36.499393  144554 cri.go:89] found id: "47fd425298dfb82b464ea2631993ccdbafec7010573692d5712f9a87a01f16f0"
	I1019 17:16:36.499419  144554 cri.go:89] found id: "ea03ca461af340c24dd1aa86c5a7ad19d30dae629f7e6a053f5747e9dd873fc2"
	I1019 17:16:36.499447  144554 cri.go:89] found id: "3fd9354b9af733751887463d963607f9345e24820435ad304bd0a19963b80997"
	I1019 17:16:36.499454  144554 cri.go:89] found id: "94ea94eabd15553243a43b3b9125ed085c7958afe81d37108c820fadd358a52c"
	I1019 17:16:36.499457  144554 cri.go:89] found id: ""
	I1019 17:16:36.499510  144554 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:16:36.592250  144554 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:36Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:16:36.592350  144554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:36.615980  144554 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:16:36.615996  144554 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:16:36.616050  144554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:16:36.646629  144554 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:16:36.647166  144554 kubeconfig.go:125] found "pause-752547" server: "https://192.168.76.2:8443"
	I1019 17:16:36.647755  144554 kapi.go:59] client config for pause-752547: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key", CAFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21202b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:16:36.648217  144554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 17:16:36.648230  144554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 17:16:36.648235  144554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 17:16:36.648241  144554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 17:16:36.648245  144554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 17:16:36.648639  144554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:16:36.660829  144554 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:16:36.660863  144554 kubeadm.go:602] duration metric: took 44.860822ms to restartPrimaryControlPlane
	I1019 17:16:36.660894  144554 kubeadm.go:403] duration metric: took 241.076425ms to StartCluster
	I1019 17:16:36.660917  144554 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:36.660999  144554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:16:36.661641  144554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:36.661910  144554 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:36.662307  144554 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:16:36.662380  144554 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:36.667706  144554 out.go:179] * Enabled addons: 
	I1019 17:16:36.667734  144554 out.go:179] * Verifying Kubernetes components...
	I1019 17:16:36.270889  144876 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:36.392275  144876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:16:36.471650  144876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:16:36.486138  144876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:16:36.486261  144876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:16:36.517891  144876 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
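	The find/-exec pass renames every bridge/podman CNI config out of the way rather than deleting it; per the log line above, the directory should afterwards look like this (reconstructed, not captured):
	  ls /etc/cni/net.d
	  # 87-podman-bridge.conflist.mk_disabled
	  # 10-crio-bridge.conflist.disabled.mk_disabled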
	I1019 17:16:36.517960  144876 start.go:496] detecting cgroup driver to use...
	I1019 17:16:36.517992  144876 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1019 17:16:36.518069  144876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:16:36.540157  144876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:16:36.558366  144876 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:16:36.558485  144876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:16:36.582745  144876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:16:36.616887  144876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:16:36.851442  144876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:16:37.056381  144876 docker.go:234] disabling docker service ...
	I1019 17:16:37.056491  144876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:16:37.099477  144876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:16:37.116879  144876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:16:37.322911  144876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:16:37.528486  144876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:16:37.545868  144876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:16:37.564422  144876 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:16:37.564489  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.574054  144876 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:16:37.574120  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.583132  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.591550  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.599875  144876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:16:37.607530  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.615912  144876 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.628613  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
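	Between the crictl.yaml write and the sed series above, the runtime config on the node should now read as follows (a reconstruction from the commands, not a capture of the files):
	  cat /etc/crictl.yaml
	  # runtime-endpoint: unix:///var/run/crio/crio.sock
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",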
	I1019 17:16:37.637188  144876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:16:37.645014  144876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:16:37.655080  144876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:37.856177  144876 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:16:38.093887  144876 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:16:38.094005  144876 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:16:38.099328  144876 start.go:564] Will wait 60s for crictl version
	I1019 17:16:38.099438  144876 ssh_runner.go:195] Run: which crictl
	I1019 17:16:38.103095  144876 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:16:38.159754  144876 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:16:38.159881  144876 ssh_runner.go:195] Run: crio --version
	I1019 17:16:38.223665  144876 ssh_runner.go:195] Run: crio --version
	I1019 17:16:38.280781  144876 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:16:36.671014  144554 addons.go:515] duration metric: took 8.691246ms for enable addons: enabled=[]
	I1019 17:16:36.671057  144554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:37.020124  144554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:37.040875  144554 node_ready.go:35] waiting up to 6m0s for node "pause-752547" to be "Ready" ...
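	node_ready polls the Node object until its Ready condition turns True; an equivalent manual check against the same cluster (assuming kubeconfig already points at pause-752547) is:
	  kubectl wait --for=condition=Ready node/pause-752547 --timeout=6m0s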
	I1019 17:16:38.283556  144876 cli_runner.go:164] Run: docker network inspect force-systemd-env-386165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:38.304600  144876 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:16:38.310969  144876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:16:38.327843  144876 kubeadm.go:884] updating cluster {Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:38.327967  144876 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:38.328019  144876 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.387135  144876 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.387161  144876 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:38.387997  144876 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.434497  144876 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.434522  144876 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:38.434530  144876 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:16:38.434626  144876 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-386165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
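	This ExecStart drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 374-byte scp below); once installed, the merged unit can be inspected on the node with:
	  # Shows kubelet.service plus every drop-in that overrides it.
	  systemctl cat kubelet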
	I1019 17:16:38.434719  144876 ssh_runner.go:195] Run: crio config
	I1019 17:16:38.549364  144876 cni.go:84] Creating CNI manager for ""
	I1019 17:16:38.549435  144876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:38.549467  144876 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:16:38.549515  144876 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-386165 NodeName:force-systemd-env-386165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:38.549679  144876 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-386165"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
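	The generated config can be exercised without touching the node first; kubeadm init accepts --dry-run with the same file minikube uploads below (a sketch, not part of the test run):
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run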
	
	I1019 17:16:38.549784  144876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:38.560722  144876 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:38.560835  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:38.569482  144876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1019 17:16:38.593415  144876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:38.630953  144876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1019 17:16:38.648979  144876 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:38.654339  144876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
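	The one-liner strips any stale control-plane.minikube.internal entry and appends the fresh mapping, so together with the earlier host.minikube.internal edit, /etc/hosts should now contain (reconstructed from the two commands):
	  grep minikube.internal /etc/hosts
	  # 192.168.85.1	host.minikube.internal
	  # 192.168.85.2	control-plane.minikube.internal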
	I1019 17:16:38.670275  144876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:38.871924  144876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:38.904035  144876 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165 for IP: 192.168.85.2
	I1019 17:16:38.904056  144876 certs.go:195] generating shared ca certs ...
	I1019 17:16:38.904072  144876 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:38.904255  144876 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:16:38.904320  144876 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:16:38.904334  144876 certs.go:257] generating profile certs ...
	I1019 17:16:38.904404  144876 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.key
	I1019 17:16:38.904422  144876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.crt with IP's: []
	I1019 17:16:39.104950  144876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.crt ...
	I1019 17:16:39.104981  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.crt: {Name:mkd6779e747eccbe3e78bd040b63457f325a62c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.105186  144876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.key ...
	I1019 17:16:39.105205  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.key: {Name:mk02a35b15a399172032d9128548461410cbffdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.105327  144876 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e
	I1019 17:16:39.105348  144876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:16:39.600909  144876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e ...
	I1019 17:16:39.600941  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e: {Name:mkd866d8775775d398b5578cba21fdc5b180dd89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.601174  144876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e ...
	I1019 17:16:39.601191  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e: {Name:mk851a7cda86e6d4bef40c63ef44abba6296f2fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.601304  144876 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt
	I1019 17:16:39.601404  144876 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key
	I1019 17:16:39.601504  144876 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key
	I1019 17:16:39.601526  144876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt with IP's: []
	I1019 17:16:40.614466  144876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt ...
	I1019 17:16:40.614499  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt: {Name:mk0398eb88ae309dafe50044b7616ecf769c0e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:40.614725  144876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key ...
	I1019 17:16:40.614742  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key: {Name:mk293eaabdb7a2b6ba00c1fcd773e11110ca6c0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
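	Each profile cert generated above is a leaf signed by the shared minikube CA. A minimal openssl sketch of the same pattern (hypothetical filenames; minikube does this in Go via crypto.go, not by shelling out):
	  # Client key, then a CSR carrying the identity minikube's client certs
	  # use (O=system:masters, CN=minikube-user), then a CA-signed leaf.
	  openssl genrsa -out client.key 2048
	  openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt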
	I1019 17:16:40.614858  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1019 17:16:40.614895  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1019 17:16:40.614915  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1019 17:16:40.614936  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1019 17:16:40.614951  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1019 17:16:40.614993  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1019 17:16:40.615012  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1019 17:16:40.615027  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1019 17:16:40.615089  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:16:40.615143  144876 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:40.615159  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:16:40.615185  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:40.615236  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:40.615271  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:40.615333  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:40.615379  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem -> /usr/share/ca-certificates/4111.pem
	I1019 17:16:40.615407  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> /usr/share/ca-certificates/41112.pem
	I1019 17:16:40.615426  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:40.615959  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:40.661337  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:40.702994  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:40.735962  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:16:40.767771  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1019 17:16:40.799028  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:16:40.829101  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:40.859791  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:16:40.882403  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:16:40.916904  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:16:40.951535  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:40.984818  144876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:16:41.012537  144876 ssh_runner.go:195] Run: openssl version
	I1019 17:16:41.024252  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:16:41.037359  144876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:16:41.041808  144876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:16:41.041908  144876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:16:41.103400  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:16:41.111590  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:41.122716  144876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:41.127067  144876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:41.127159  144876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:41.170832  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:16:41.179348  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:16:41.188588  144876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:16:41.193050  144876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:16:41.193148  144876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:16:41.237933  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
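	The <hash>.0 link names above follow OpenSSL's subject-hash convention, which is how the TLS stack locates a trusted cert in /etc/ssl/certs; each pairing can be re-derived by hand:
	  # b5213941 is the subject hash printed for minikubeCA.pem a few lines up.
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  ls -l /etc/ssl/certs/b5213941.0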
	I1019 17:16:41.246464  144876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:41.251046  144876 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:16:41.251130  144876 kubeadm.go:401] StartCluster: {Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:41.251219  144876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:41.251317  144876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:41.296816  144876 cri.go:89] found id: ""
	I1019 17:16:41.296918  144876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:41.309431  144876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:16:41.322016  144876 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:16:41.322113  144876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:16:41.336654  144876 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:16:41.336676  144876 kubeadm.go:158] found existing configuration files:
	
	I1019 17:16:41.336758  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:16:41.350183  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:16:41.350271  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:16:41.366109  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:16:41.378695  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:16:41.378789  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:16:41.396529  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:16:41.412071  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:16:41.412177  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:16:41.424578  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:16:41.441109  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:16:41.441210  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:16:41.454020  144876 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:16:41.522386  144876 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:16:41.527014  144876 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:16:41.608574  144876 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:16:41.608691  144876 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:16:41.608769  144876 kubeadm.go:319] OS: Linux
	I1019 17:16:41.608844  144876 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:16:41.608940  144876 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:16:41.609032  144876 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:16:41.609109  144876 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:16:41.609187  144876 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:16:41.609273  144876 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:16:41.609366  144876 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:16:41.609452  144876 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:16:41.609525  144876 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:16:41.759064  144876 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:16:41.759208  144876 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:16:41.759329  144876 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:16:41.778925  144876 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:16:42.443223  144554 node_ready.go:49] node "pause-752547" is "Ready"
	I1019 17:16:42.443254  144554 node_ready.go:38] duration metric: took 5.402338036s for node "pause-752547" to be "Ready" ...
	I1019 17:16:42.443267  144554 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:16:42.443326  144554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:16:42.463693  144554 api_server.go:72] duration metric: took 5.801743174s to wait for apiserver process to appear ...
	I1019 17:16:42.463720  144554 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:16:42.463740  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:42.572856  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:42.572950  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
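	The [+]/[-] listing is the apiserver's verbose health report; the [-] entries are post-start hooks that have not finished yet, and they clear one by one in the retries below. The same probe by hand (assuming anonymous access to /healthz, which Kubernetes allows by default; -k skips verification of the self-signed serving cert):
	  curl -ks "https://192.168.76.2:8443/healthz?verbose"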
	I1019 17:16:42.964525  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:42.979294  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:42.979325  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:16:43.463847  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:43.486736  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:43.486813  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:16:43.964394  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:43.983466  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:43.983544  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:16:41.782329  144876 out.go:252]   - Generating certificates and keys ...
	I1019 17:16:41.782476  144876 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:16:41.782595  144876 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:16:42.153691  144876 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:16:42.881460  144876 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:16:44.090729  144876 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:16:44.455713  144876 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:16:44.520620  144876 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:16:44.520772  144876 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-386165 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:16:45.659010  144876 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:16:45.659634  144876 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-386165 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:16:45.967615  144876 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
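	kubeadm writes all of these into the certificateDir from the config above (/var/lib/minikube/certs); once init completes they can be audited with a sketch like (assuming the --cert-dir flag on this kubeadm version):
	  sudo kubeadm certs check-expiration --cert-dir /var/lib/minikube/certs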
	I1019 17:16:44.464286  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:44.483227  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:16:44.487553  144554 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:44.487575  144554 api_server.go:131] duration metric: took 2.023847727s to wait for apiserver health ...
	I1019 17:16:44.487585  144554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:44.496309  144554 system_pods.go:59] 7 kube-system pods found
	I1019 17:16:44.496336  144554 system_pods.go:61] "coredns-66bc5c9577-fmhl6" [43eda531-cfb2-4771-bb86-16a49fefe7fb] Running
	I1019 17:16:44.496342  144554 system_pods.go:61] "etcd-pause-752547" [d6f4969b-8fb6-4b27-88c3-3e1f6e043d63] Running
	I1019 17:16:44.496346  144554 system_pods.go:61] "kindnet-5z6kw" [b7a10ba9-dd39-4b6a-8fba-777d8bf9cdc4] Running
	I1019 17:16:44.496351  144554 system_pods.go:61] "kube-apiserver-pause-752547" [451e7db6-d7e4-4247-9971-f3ba4fdbbcb7] Running
	I1019 17:16:44.496361  144554 system_pods.go:61] "kube-controller-manager-pause-752547" [33731318-b561-4c38-b33d-a21fc5c52ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:44.496366  144554 system_pods.go:61] "kube-proxy-5t82h" [7ae7f5b6-768e-4958-ab63-4851df32c123] Running
	I1019 17:16:44.496373  144554 system_pods.go:61] "kube-scheduler-pause-752547" [fde42862-4f3c-4f64-99c6-af8d842aaec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:44.496378  144554 system_pods.go:74] duration metric: took 8.788773ms to wait for pod list to return data ...
	I1019 17:16:44.496388  144554 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:44.506850  144554 default_sa.go:45] found service account: "default"
	I1019 17:16:44.506871  144554 default_sa.go:55] duration metric: took 10.477697ms for default service account to be created ...
	I1019 17:16:44.506880  144554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:16:44.509791  144554 system_pods.go:86] 7 kube-system pods found
	I1019 17:16:44.509851  144554 system_pods.go:89] "coredns-66bc5c9577-fmhl6" [43eda531-cfb2-4771-bb86-16a49fefe7fb] Running
	I1019 17:16:44.509872  144554 system_pods.go:89] "etcd-pause-752547" [d6f4969b-8fb6-4b27-88c3-3e1f6e043d63] Running
	I1019 17:16:44.509891  144554 system_pods.go:89] "kindnet-5z6kw" [b7a10ba9-dd39-4b6a-8fba-777d8bf9cdc4] Running
	I1019 17:16:44.509930  144554 system_pods.go:89] "kube-apiserver-pause-752547" [451e7db6-d7e4-4247-9971-f3ba4fdbbcb7] Running
	I1019 17:16:44.509958  144554 system_pods.go:89] "kube-controller-manager-pause-752547" [33731318-b561-4c38-b33d-a21fc5c52ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:44.509981  144554 system_pods.go:89] "kube-proxy-5t82h" [7ae7f5b6-768e-4958-ab63-4851df32c123] Running
	I1019 17:16:44.510016  144554 system_pods.go:89] "kube-scheduler-pause-752547" [fde42862-4f3c-4f64-99c6-af8d842aaec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:44.510042  144554 system_pods.go:126] duration metric: took 3.15519ms to wait for k8s-apps to be running ...
	I1019 17:16:44.510063  144554 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:16:44.510149  144554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:44.536999  144554 system_svc.go:56] duration metric: took 26.926103ms WaitForService to wait for kubelet
	I1019 17:16:44.537024  144554 kubeadm.go:587] duration metric: took 7.875079343s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:44.537041  144554 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:44.546853  144554 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:16:44.546941  144554 node_conditions.go:123] node cpu capacity is 2
	I1019 17:16:44.546968  144554 node_conditions.go:105] duration metric: took 9.921122ms to run NodePressure ...
	I1019 17:16:44.546994  144554 start.go:242] waiting for startup goroutines ...
	I1019 17:16:44.547032  144554 start.go:247] waiting for cluster config update ...
	I1019 17:16:44.547056  144554 start.go:256] writing updated cluster config ...
	I1019 17:16:44.547443  144554 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:44.550993  144554 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:44.551629  144554 kapi.go:59] client config for pause-752547: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key", CAFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21202b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:16:44.560597  144554 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fmhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.572529  144554 pod_ready.go:94] pod "coredns-66bc5c9577-fmhl6" is "Ready"
	I1019 17:16:44.572611  144554 pod_ready.go:86] duration metric: took 11.939041ms for pod "coredns-66bc5c9577-fmhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.580430  144554 pod_ready.go:83] waiting for pod "etcd-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.592998  144554 pod_ready.go:94] pod "etcd-pause-752547" is "Ready"
	I1019 17:16:44.593069  144554 pod_ready.go:86] duration metric: took 12.566598ms for pod "etcd-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.596363  144554 pod_ready.go:83] waiting for pod "kube-apiserver-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.605151  144554 pod_ready.go:94] pod "kube-apiserver-pause-752547" is "Ready"
	I1019 17:16:44.605227  144554 pod_ready.go:86] duration metric: took 8.794878ms for pod "kube-apiserver-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.612381  144554 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:46.626915  144554 pod_ready.go:94] pod "kube-controller-manager-pause-752547" is "Ready"
	I1019 17:16:46.626955  144554 pod_ready.go:86] duration metric: took 2.014490355s for pod "kube-controller-manager-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:46.755336  144554 pod_ready.go:83] waiting for pod "kube-proxy-5t82h" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:47.155070  144554 pod_ready.go:94] pod "kube-proxy-5t82h" is "Ready"
	I1019 17:16:47.155099  144554 pod_ready.go:86] duration metric: took 399.732785ms for pod "kube-proxy-5t82h" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:47.355749  144554 pod_ready.go:83] waiting for pod "kube-scheduler-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:48.156033  144554 pod_ready.go:94] pod "kube-scheduler-pause-752547" is "Ready"
	I1019 17:16:48.156078  144554 pod_ready.go:86] duration metric: took 800.301349ms for pod "kube-scheduler-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:48.156091  144554 pod_ready.go:40] duration metric: took 3.605001305s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:48.233292  144554 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:16:48.236663  144554 out.go:179] * Done! kubectl is now configured to use "pause-752547" cluster and "default" namespace by default
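	The pod_ready.go lines above trace a simple poll: for each control-plane label (k8s-app=kube-dns, component=etcd, ...), list the matching kube-system pod and wait until its Ready condition is True, within the 4m0s budget. A minimal client-go sketch of the same pattern, assuming a reachable kubeconfig (an illustration, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget above
	for time.Now().Before(deadline) {
		// The log waits on component labels such as component=etcd.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "component=etcd"})
		if err == nil && len(pods.Items) > 0 && isReady(&pods.Items[0]) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}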
	
	
	==> CRI-O <==
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.659127056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.732637259Z" level=info msg="Created container b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2: kube-system/kindnet-5z6kw/kindnet-cni" id=0d28d40b-1c32-4105-8472-ee2391451250 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.734087494Z" level=info msg="Starting container: b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2" id=61dbd5eb-04a2-43a3-a710-d27c23d00fb5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.73695437Z" level=info msg="Started container" PID=2353 containerID=b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2 description=kube-system/kindnet-5z6kw/kindnet-cni id=61dbd5eb-04a2-43a3-a710-d27c23d00fb5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=62dc86861fb08cbbe8a933c2746b94aaac23ce2d0588697e3f2cebb325108b79
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.786779421Z" level=info msg="Created container 07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83: kube-system/coredns-66bc5c9577-fmhl6/coredns" id=e5cf0414-4785-452b-8b9f-8022129db909 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.789384773Z" level=info msg="Starting container: 07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83" id=77697df0-6404-495c-9f86-ce8c59af82ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.794868768Z" level=info msg="Started container" PID=2368 containerID=07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83 description=kube-system/coredns-66bc5c9577-fmhl6/coredns id=77697df0-6404-495c-9f86-ce8c59af82ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ea5ec8e996c8d63af46483aeec9496a07892f6a303abf109226e3e27374cd77
	Oct 19 17:16:36 pause-752547 crio[2099]: time="2025-10-19T17:16:36.124959032Z" level=info msg="Created container 0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0: kube-system/kube-proxy-5t82h/kube-proxy" id=3c1fc3a2-b9a4-489a-a4e3-49a49a82ba84 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:36 pause-752547 crio[2099]: time="2025-10-19T17:16:36.134731616Z" level=info msg="Starting container: 0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0" id=ad84ae6b-8f89-4149-afb7-014219eac519 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:36 pause-752547 crio[2099]: time="2025-10-19T17:16:36.146170649Z" level=info msg="Started container" PID=2363 containerID=0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0 description=kube-system/kube-proxy-5t82h/kube-proxy id=ad84ae6b-8f89-4149-afb7-014219eac519 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38a087e1bd4894631e6f7e33cba60db2ca50542568694c43f227f1d3e18105f2
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.143559315Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.147853845Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.147890514Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.147914612Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.152833179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.152865877Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.152885422Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.162943316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.163034935Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.163074222Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.166902738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.1669393Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.166959838Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.17490474Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.174944437Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	07974c9cd727f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   16 seconds ago       Running             coredns                   1                   3ea5ec8e996c8       coredns-66bc5c9577-fmhl6               kube-system
	0175839b90bb2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   16 seconds ago       Running             kube-proxy                1                   38a087e1bd489       kube-proxy-5t82h                       kube-system
	b83e5f99bc515       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   16 seconds ago       Running             kindnet-cni               1                   62dc86861fb08       kindnet-5z6kw                          kube-system
	b062a3965984c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago       Running             kube-apiserver            1                   2737e0eaa4b14       kube-apiserver-pause-752547            kube-system
	8a24b2b0a2c9c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago       Running             kube-scheduler            1                   1b8b30b176947       kube-scheduler-pause-752547            kube-system
	94209b2d27552       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago       Running             kube-controller-manager   1                   2090a5fb3744b       kube-controller-manager-pause-752547   kube-system
	bbf49db30ebb7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago       Running             etcd                      1                   43b666445b4b9       etcd-pause-752547                      kube-system
	6ee0aa7f3241a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   29 seconds ago       Exited              coredns                   0                   3ea5ec8e996c8       coredns-66bc5c9577-fmhl6               kube-system
	334cbbfd7bb38       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   62dc86861fb08       kindnet-5z6kw                          kube-system
	4da6e945ad26d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   38a087e1bd489       kube-proxy-5t82h                       kube-system
	47fd425298dfb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   43b666445b4b9       etcd-pause-752547                      kube-system
	ea03ca461af34       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   1b8b30b176947       kube-scheduler-pause-752547            kube-system
	3fd9354b9af73       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   2090a5fb3744b       kube-controller-manager-pause-752547   kube-system
	94ea94eabd155       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   2737e0eaa4b14       kube-apiserver-pause-752547            kube-system
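	This table is assembled from the same runtime.v1.RuntimeService gRPC surface that the CRI-O log entries above reference (the CreateContainer/StartContainer request IDs). A hedged sketch of listing containers over the CRI socket with k8s.io/cri-api; the socket path /var/run/crio/crio.sock is an assumption (CRI-O's usual default):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Unix-socket dial; CRI sockets carry no TLS, hence insecure credentials.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Short 13-char IDs, as in the table above (full IDs are 64 hex chars).
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}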
	
	
	==> coredns [07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48674 - 13662 "HINFO IN 129519007070086537.9052892079714812723. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010519912s
	
	
	==> coredns [6ee0aa7f3241ab005481f75cf8b244cc6d96f2b782648dcd0e1f6d6ddd50106a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57887 - 8074 "HINFO IN 6553929530836081297.8498647211336222654. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021393616s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-752547
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-752547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=pause-752547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-752547
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:16:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:15:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:15:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:15:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:16:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-752547
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                3d89df2f-46c5-46d7-b087-ef25fcc7a506
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fmhl6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     71s
	  kube-system                 etcd-pause-752547                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-5z6kw                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      72s
	  kube-system                 kube-apiserver-pause-752547             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-752547    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-5t82h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-pause-752547             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 70s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     87s (x8 over 87s)  kubelet          Node pause-752547 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node pause-752547 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node pause-752547 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Normal   Starting                 77s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 77s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  76s                kubelet          Node pause-752547 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s                kubelet          Node pause-752547 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s                kubelet          Node pause-752547 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           73s                node-controller  Node pause-752547 event: Registered Node pause-752547 in Controller
	  Normal   NodeReady                30s                kubelet          Node pause-752547 status is now: NodeReady
	  Normal   RegisteredNode           4s                 node-controller  Node pause-752547 event: Registered Node pause-752547 in Controller
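	The Allocated resources percentages above are summed requests divided by node allocatable, truncated to whole percent: 850m of 2 CPUs is 42%, and 220Mi of 8022308Ki is 2%. A small sketch reproducing the arithmetic with apimachinery's resource.Quantity (values copied from the tables above):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// pct is the integer-truncated percentage of req relative to alloc.
func pct(req, alloc resource.Quantity) int64 {
	return req.MilliValue() * 100 / alloc.MilliValue()
}

func main() {
	cpuReq := resource.MustParse("850m")
	cpuAlloc := resource.MustParse("2")
	memReq := resource.MustParse("220Mi")
	memAlloc := resource.MustParse("8022308Ki")

	fmt.Printf("cpu    %s (%d%%)\n", cpuReq.String(), pct(cpuReq, cpuAlloc)) // 42%
	fmt.Printf("memory %s (%d%%)\n", memReq.String(), pct(memReq, memAlloc)) // 2%
}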
	
	
	==> dmesg <==
	[Oct19 16:52] overlayfs: idmapped layers are currently not supported
	[  +3.685397] overlayfs: idmapped layers are currently not supported
	[Oct19 16:53] overlayfs: idmapped layers are currently not supported
	[ +41.111710] overlayfs: idmapped layers are currently not supported
	[Oct19 16:55] overlayfs: idmapped layers are currently not supported
	[  +3.291702] overlayfs: idmapped layers are currently not supported
	[ +36.586345] overlayfs: idmapped layers are currently not supported
	[Oct19 16:56] overlayfs: idmapped layers are currently not supported
	[Oct19 16:58] overlayfs: idmapped layers are currently not supported
	[Oct19 17:02] overlayfs: idmapped layers are currently not supported
	[Oct19 17:03] overlayfs: idmapped layers are currently not supported
	[Oct19 17:04] overlayfs: idmapped layers are currently not supported
	[Oct19 17:05] overlayfs: idmapped layers are currently not supported
	[Oct19 17:06] overlayfs: idmapped layers are currently not supported
	[Oct19 17:07] overlayfs: idmapped layers are currently not supported
	[Oct19 17:08] overlayfs: idmapped layers are currently not supported
	[  +0.231072] overlayfs: idmapped layers are currently not supported
	[Oct19 17:09] overlayfs: idmapped layers are currently not supported
	[ +28.820689] overlayfs: idmapped layers are currently not supported
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [47fd425298dfb82b464ea2631993ccdbafec7010573692d5712f9a87a01f16f0] <==
	{"level":"warn","ts":"2025-10-19T17:15:29.189403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.219350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.239507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.275101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.299099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.323835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.467564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43928","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:16:26.504134Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T17:16:26.504185Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-752547","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-19T17:16:26.504273Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:16:26.655769Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:16:26.655854Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:16:26.655879Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-19T17:16:26.655992Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-19T17:16:26.656012Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656255Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656301Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:16:26.656310Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656349Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656364Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:16:26.656371Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:16:26.659181Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-19T17:16:26.659262Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:16:26.659293Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-19T17:16:26.659307Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-752547","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [bbf49db30ebb7d6d396c472885ef43fe613819b7c230af8d3fe337f3fe609fa7] <==
	{"level":"warn","ts":"2025-10-19T17:16:39.166641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.224078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.265758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.297250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.324742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.372746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.403234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.461315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.570310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.576182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.631421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.679832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.715814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.826678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.886671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.947421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.985979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.120714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.143845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.201307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.295646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.302863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.360957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.410634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.526712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:16:52 up 59 min,  0 user,  load average: 5.37, 3.19, 2.42
	Linux pause-752547 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [334cbbfd7bb38d91993a30dff7863196ac739f81e8e6849b96aba3bd922ddaac] <==
	I1019 17:15:41.008602       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:15:41.010512       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:15:41.010657       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:15:41.010671       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:15:41.010685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:15:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:15:41.196123       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:15:41.196193       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:15:41.196203       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:15:41.197136       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:16:11.196631       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:16:11.196787       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:16:11.196893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:16:11.197012       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 17:16:12.696670       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:12.696791       1 metrics.go:72] Registering metrics
	I1019 17:16:12.696942       1 controller.go:711] "Syncing nftables rules"
	I1019 17:16:21.202959       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:16:21.203014       1 main.go:301] handling current node
	
	
	==> kindnet [b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2] <==
	I1019 17:16:35.881481       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:35.904538       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:16:35.904755       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:35.904811       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:35.904851       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:36.162793       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:36.170921       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:36.170963       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:36.171444       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:42.674609       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:42.674717       1 metrics.go:72] Registering metrics
	I1019 17:16:42.674803       1 controller.go:711] "Syncing nftables rules"
	I1019 17:16:46.143107       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:16:46.143233       1 main.go:301] handling current node
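	Both kindnet logs show the standard client-go startup handshake: reflectors List/Watch the API (hence the "Failed to watch ... i/o timeout" errors while the apiserver was down), and the controller blocks on "Waiting for caches to sync" until every informer reports synced. A minimal sketch of that pattern with a pod informer, assuming a reachable kubeconfig:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	factory.Start(stop) // starts reflectors: List+Watch, retrying on errors
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		fmt.Println("timed out waiting for caches to sync")
		return
	}
	fmt.Println("caches are synced") // the informer-ready line seen in the log
}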
	
	
	==> kube-apiserver [94ea94eabd15553243a43b3b9125ed085c7958afe81d37108c820fadd358a52c] <==
	W1019 17:16:26.528026       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528102       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528378       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528474       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528577       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528669       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528764       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.529038       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.530027       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.530861       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.530974       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531031       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531100       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531163       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531221       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531274       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531348       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531421       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531518       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531896       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531977       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532019       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532052       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532084       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532100       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b062a3965984c4cd7524d66035a8a2c2abcd865fca79cbffd9533f56e1948ecb] <==
	I1019 17:16:42.427934       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1019 17:16:42.492926       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:42.515572       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:16:42.515730       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:16:42.515885       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:16:42.542686       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:16:42.542782       1 policy_source.go:240] refreshing policies
	I1019 17:16:42.543894       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:16:42.546712       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:16:42.546833       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:16:42.546895       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:16:42.552618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:42.555890       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:16:42.566657       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:16:42.573362       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:42.577085       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:16:42.602710       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1019 17:16:42.649071       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:16:42.667716       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:43.135393       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:45.644164       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:47.099932       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:16:47.296596       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:47.345704       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:16:47.397626       1 controller.go:667] quota admission added evaluator for: deployments.apps
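	By 17:16:47 the replacement apiserver has synced its informer caches and registered quota evaluators, the point at which it serves normally again. A minimal external readiness check against the endpoint used throughout this report (192.168.76.2:8443) could look like the sketch below; InsecureSkipVerify is only tolerable because the test cluster uses a self-signed CA, and anonymous access to /readyz depends on the cluster's RBAC defaults:
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
		)
	
		func main() {
			// Throwaway probe for a self-signed test cluster; do not reuse as-is.
			client := &http.Client{Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			}}
			resp, err := client.Get("https://192.168.76.2:8443/readyz")
			if err != nil {
				fmt.Println("apiserver not reachable:", err)
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			fmt.Println(resp.Status, string(body))
		}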
	
	
	==> kube-controller-manager [3fd9354b9af733751887463d963607f9345e24820435ad304bd0a19963b80997] <==
	I1019 17:15:38.856805       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-752547" podCIDRs=["10.244.0.0/24"]
	I1019 17:15:38.868691       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:15:38.868656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:15:38.868770       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:15:38.868831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:15:38.856047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:15:38.861617       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:15:38.861647       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:15:38.870335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:15:38.874646       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:15:38.875136       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:15:38.890777       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:15:38.891864       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:15:38.907407       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:15:38.907455       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:15:38.907489       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:15:38.907503       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:15:38.907529       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:15:38.913323       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:15:38.915281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:38.915314       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:15:38.915398       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:15:38.933370       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:38.933526       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:23.838324       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
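	The long run of "Waiting for caches to sync" / "Caches are synced" pairs is the standard client-go shared-informer handshake: each controller lists and watches its resources and only starts its workers once the local cache is warm. A minimal sketch of that pattern (assuming a kubeconfig at the default path; this shows the mechanism behind the log lines, not minikube's own code):
	
		package main
	
		import (
			"fmt"
			"time"
	
			"k8s.io/client-go/informers"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/cache"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			client := kubernetes.NewForConfigOrDie(cfg)
	
			factory := informers.NewSharedInformerFactory(client, 30*time.Second)
			pods := factory.Core().V1().Pods().Informer()
	
			stop := make(chan struct{})
			defer close(stop)
			factory.Start(stop)
	
			// Controllers log "Caches are synced" only after this returns true.
			if !cache.WaitForCacheSync(stop, pods.HasSynced) {
				panic("caches did not sync")
			}
			fmt.Println("caches are synced")
		}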
	
	
	==> kube-controller-manager [94209b2d27552f9e8c63fa54400bcfb70580abf93c73e695e379ac43c413bb6e] <==
	I1019 17:16:47.073378       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:16:47.073535       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:47.073571       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:47.069445       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:16:47.075757       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:16:47.082295       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:16:47.091941       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:16:47.092138       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:47.092751       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:16:47.098601       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:16:47.098749       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:16:47.102011       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:16:47.104511       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:16:47.116367       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:47.118792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:47.118877       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:16:47.118910       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:16:47.125220       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:16:47.127839       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:47.127964       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:16:47.129308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:47.139966       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:16:47.140055       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:16:47.140133       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-752547"
	I1019 17:16:47.140177       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
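	"Missing timestamp for Node. Assuming now as a timestamp" is the node-lifecycle controller rebuilding its health bookkeeping after the restart, not a node failure; the zone immediately returns to "Normal". Had the zone state stayed degraded, the node conditions could be dumped directly (a hypothetical follow-up, in the style of the kubectl invocations used elsewhere in this post-mortem):
	
		kubectl --context pause-752547 get node pause-752547 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'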
	
	
	==> kube-proxy [0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0] <==
	I1019 17:16:41.283432       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:45.368606       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:45.470613       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:45.494691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:16:45.494812       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:45.851981       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:45.852099       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:45.883916       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:45.884309       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:45.884536       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:45.885861       1 config.go:200] "Starting service config controller"
	I1019 17:16:45.891875       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:45.892046       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:45.892077       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:45.892115       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:45.892144       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:45.892860       1 config.go:309] "Starting node config controller"
	I1019 17:16:45.895954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:45.896043       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:16:45.992198       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:45.992455       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:45.992564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
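	The single E-level line in each kube-proxy startup ("nodePortAddresses is unset") is a configuration hint, not a failure: without the setting, NodePort connections are accepted on every local IP. Acting on the hint would mean editing the kube-proxy ConfigMap; a sketch of the relevant KubeProxyConfiguration stanza, with field names taken from the warning itself (verify against the running version before applying):
	
		apiVersion: kubeproxy.config.k8s.io/v1alpha1
		kind: KubeProxyConfiguration
		mode: iptables
		nodePortAddresses:
		  - primary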
	
	
	==> kube-proxy [4da6e945ad26d71d23fab266356135c9a32f167e61ea01537dc707875e6ce17d] <==
	I1019 17:15:41.071956       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:15:41.336206       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:15:41.438381       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:15:41.438508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:15:41.448975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:15:41.545337       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:15:41.545455       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:15:41.551538       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:15:41.551880       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:15:41.552079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:41.553349       1 config.go:200] "Starting service config controller"
	I1019 17:15:41.553551       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:15:41.553609       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:15:41.553658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:15:41.553702       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:15:41.553729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:15:41.555859       1 config.go:309] "Starting node config controller"
	I1019 17:15:41.562613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:15:41.562694       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:15:41.654125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:15:41.654123       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:15:41.654155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8a24b2b0a2c9c614c20987c20119908c64d441f8f029e558f32af2405c7f6e82] <==
	I1019 17:16:40.679583       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:16:45.529424       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:45.529465       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:45.546842       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:45.547051       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:16:45.547113       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:16:45.547162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:16:45.552893       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:45.554793       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:45.553146       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:45.554880       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:45.648058       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:16:45.659102       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:45.659291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [ea03ca461af340c24dd1aa86c5a7ad19d30dae629f7e6a053f5747e9dd873fc2] <==
	I1019 17:15:30.039946       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:15:33.359584       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:15:33.360849       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:33.366827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:15:33.366909       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:15:33.366939       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:15:33.366991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:15:33.373139       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:15:33.373175       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:15:33.383153       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:33.383184       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:33.467757       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:15:33.483254       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:33.483202       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:26.503123       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 17:16:26.503150       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 17:16:26.503229       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 17:16:26.503266       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:26.503283       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:26.503299       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1019 17:16:26.503607       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 17:16:26.503629       1 run.go:72] "command failed" err="finished without leader elect"
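	The closing "finished without leader elect" error is the expected exit path for the old scheduler: its apiserver went away mid-lease, so the leader-election loop returned before observing a new leader. To confirm which instance holds the lock after the restart, the coordination lease can be inspected (a hedged suggestion; leases are the default election mechanism on this Kubernetes version):
	
		kubectl --context pause-752547 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'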
	
	
	==> kubelet <==
	Oct 19 17:16:35 pause-752547 kubelet[1310]: E1019 17:16:35.497281    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-fmhl6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="43eda531-cfb2-4771-bb86-16a49fefe7fb" pod="kube-system/coredns-66bc5c9577-fmhl6"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.258583    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="605c53e70723f013bac6c727582e3b44" pod="kube-system/etcd-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.259577    1310 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:pause-752547\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.369670    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="8d19ac977bc6499011033b1f631b082a" pod="kube-system/kube-apiserver-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.430376    1310 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         pods "kube-scheduler-pause-752547" is forbidden: User "system:node:pause-752547" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-752547' and this object
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found, role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found]
	Oct 19 17:16:42 pause-752547 kubelet[1310]:  > podUID="7f55fc68ae235c75c793be76e9967fc5" pod="kube-system/kube-scheduler-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.454889    1310 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         pods "kube-controller-manager-pause-752547" is forbidden: User "system:node:pause-752547" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-752547' and this object
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 19 17:16:42 pause-752547 kubelet[1310]:  > podUID="58e1ade8c75f1764e96c79c6a8a92a17" pod="kube-system/kube-controller-manager-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.471405    1310 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         pods "kube-proxy-5t82h" is forbidden: User "system:node:pause-752547" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-752547' and this object
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 19 17:16:42 pause-752547 kubelet[1310]:  > podUID="7ae7f5b6-768e-4958-ab63-4851df32c123" pod="kube-system/kube-proxy-5t82h"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.514100    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5z6kw\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="b7a10ba9-dd39-4b6a-8fba-777d8bf9cdc4" pod="kube-system/kindnet-5z6kw"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.525620    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-fmhl6\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="43eda531-cfb2-4771-bb86-16a49fefe7fb" pod="kube-system/coredns-66bc5c9577-fmhl6"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.594952    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-fmhl6\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="43eda531-cfb2-4771-bb86-16a49fefe7fb" pod="kube-system/coredns-66bc5c9577-fmhl6"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.604007    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="605c53e70723f013bac6c727582e3b44" pod="kube-system/etcd-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.617750    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="8d19ac977bc6499011033b1f631b082a" pod="kube-system/kube-apiserver-pause-752547"
	Oct 19 17:16:45 pause-752547 kubelet[1310]: W1019 17:16:45.291680    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 19 17:16:48 pause-752547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:16:48 pause-752547 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:16:48 pause-752547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
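	The burst of "no relationship found between node 'pause-752547' and this object" errors comes from the Node authorizer: the freshly restarted apiserver has not yet rebuilt its node-to-object graph or kubeadm's bootstrap RBAC roles, and the errors stop within the same second once it has. The check the kubelet is failing can be reproduced via impersonation (hypothetical follow-up; impersonating system:node requires an admin kubeconfig):
	
		kubectl --context pause-752547 auth can-i get pods -n kube-system --as=system:node:pause-752547 --as-group=system:nodes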
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-752547 -n pause-752547
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-752547 -n pause-752547: exit status 2 (447.572793ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
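The --format flag here renders minikube's status struct through a Go template, which is why the command prints a bare component state ("Running") even while exiting non-zero. The same mechanism can report several components in one call, using only field names that already appear in this report:

	out/minikube-linux-arm64 status -p pause-752547 --format='{{.Host}} {{.APIServer}}'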
helpers_test.go:269: (dbg) Run:  kubectl --context pause-752547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-752547
helpers_test.go:243: (dbg) docker inspect pause-752547:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40",
	        "Created": "2025-10-19T17:15:04.33945943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 135557,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:15:04.418376088Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/hostname",
	        "HostsPath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/hosts",
	        "LogPath": "/var/lib/docker/containers/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40/ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40-json.log",
	        "Name": "/pause-752547",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-752547:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-752547",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ecacb72ceacbdf9118dabfa0acb3ac15259b6888e037e161ff7a858fee1d9a40",
	                "LowerDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed399fbaedd0ad374529faee86c873830536783f6b2e7b18e971900f49e0a46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-752547",
	                "Source": "/var/lib/docker/volumes/pause-752547/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-752547",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-752547",
	                "name.minikube.sigs.k8s.io": "pause-752547",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "563c3307a3ce22fa1cce6a276d686e75379d9e2397bcaabca1c6583f0b969450",
	            "SandboxKey": "/var/run/docker/netns/563c3307a3ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-752547": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:5c:72:b3:01:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d61af5095f6a50d5c2ca76f229911e6c43a43d0573728031002cc79109832a3f",
	                    "EndpointID": "6a0b4a6e32317415cd6d2e880eee9389cb6c8ee0c90e1f6f6b068c1122cc2a4e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-752547",
	                        "ecacb72ceacb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
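The Ports map in the inspect output above is how minikube discovers the host-side bindings for the container; later in this log it extracts the SSH port with exactly this template, which is equally useful for manual debugging:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-752547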
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-752547 -n pause-752547
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-752547 -n pause-752547: exit status 2 (498.653516ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-752547 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-752547 logs -n 25: (2.164026526s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-953581 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ start   │ -p pause-752547 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ pause-752547             │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ ssh     │ -p cilium-953581 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status docker --all --full --no-pager                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat docker --no-pager                                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/docker/daemon.json                                                          │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo docker system info                                                                   │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cri-dockerd --version                                                                │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat containerd --no-pager                                                  │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo cat /etc/containerd/config.toml                                                      │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo containerd config dump                                                               │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl status crio --all --full --no-pager                                        │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo systemctl cat crio --no-pager                                                        │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ ssh     │ -p cilium-953581 sudo crio config                                                                          │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ delete  │ -p cilium-953581                                                                                           │ cilium-953581            │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p force-systemd-env-386165 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-386165 │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	│ pause   │ -p pause-752547 --alsologtostderr -v=5                                                                     │ pause-752547             │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:16:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:16:26.200504  144876 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:26.200701  144876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:26.200725  144876 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:26.200743  144876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:26.201018  144876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:16:26.201472  144876 out.go:368] Setting JSON to false
	I1019 17:16:26.202454  144876 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3534,"bootTime":1760890652,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:16:26.202669  144876 start.go:143] virtualization:  
	I1019 17:16:26.207847  144876 out.go:179] * [force-systemd-env-386165] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:16:26.211285  144876 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:16:26.211365  144876 notify.go:221] Checking for updates...
	I1019 17:16:26.217401  144876 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:16:26.220461  144876 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:16:26.223449  144876 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:16:26.226473  144876 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:16:26.229488  144876 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1019 17:16:26.233023  144876 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:26.233123  144876 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:16:26.277596  144876 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:16:26.277811  144876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:26.397013  144876 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:16:26.372908646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:16:26.397143  144876 docker.go:319] overlay module found
	I1019 17:16:26.400388  144876 out.go:179] * Using the docker driver based on user configuration
	I1019 17:16:26.403220  144876 start.go:309] selected driver: docker
	I1019 17:16:26.403258  144876 start.go:930] validating driver "docker" against <nil>
	I1019 17:16:26.403272  144876 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:16:26.404218  144876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:26.494036  144876 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:16:26.483069382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:16:26.494217  144876 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:16:26.494459  144876 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 17:16:26.497668  144876 out.go:179] * Using Docker driver with root privileges
	I1019 17:16:26.501839  144876 cni.go:84] Creating CNI manager for ""
	I1019 17:16:26.501923  144876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:26.501933  144876 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:16:26.502197  144876 start.go:353] cluster config:
	{Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:26.507858  144876 out.go:179] * Starting "force-systemd-env-386165" primary control-plane node in "force-systemd-env-386165" cluster
	I1019 17:16:26.513545  144876 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:16:26.517764  144876 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:16:26.520194  144876 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:26.520254  144876 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:16:26.520263  144876 cache.go:59] Caching tarball of preloaded images
	I1019 17:16:26.520361  144876 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:16:26.520370  144876 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:16:26.520494  144876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/config.json ...
	I1019 17:16:26.520514  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/config.json: {Name:mk022111d787195f02e6c57e7230af85b15122b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:26.520730  144876 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:16:26.545009  144876 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:16:26.545034  144876 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:16:26.545048  144876 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:16:26.545070  144876 start.go:360] acquireMachinesLock for force-systemd-env-386165: {Name:mkafa6f7a11b13b8d9ed92f31c974241a4f149dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:16:26.545174  144876 start.go:364] duration metric: took 88.165µs to acquireMachinesLock for "force-systemd-env-386165"
	I1019 17:16:26.545203  144876 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:26.545265  144876 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:16:24.644408  144554 out.go:252] * Updating the running docker "pause-752547" container ...
	I1019 17:16:24.644451  144554 machine.go:94] provisionDockerMachine start ...
	I1019 17:16:24.644546  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:24.677216  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:24.677581  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:24.677595  144554 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:16:24.842529  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-752547
	
	I1019 17:16:24.842591  144554 ubuntu.go:182] provisioning hostname "pause-752547"
	I1019 17:16:24.842663  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:24.865394  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:24.866031  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:24.866050  144554 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-752547 && echo "pause-752547" | sudo tee /etc/hostname
	I1019 17:16:25.042902  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-752547
	
	I1019 17:16:25.042990  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:25.072324  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:25.072738  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:25.072765  144554 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-752547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-752547/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-752547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:16:25.254491  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
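
The three SSH rounds above are minikube's hostname provisioning: read the current hostname, write the new one via hostname/tee into /etc/hostname, then idempotently patch the 127.0.1.1 line in /etc/hosts. A minimal sketch for spot-checking the result from the host, assuming the pause-752547 profile is still up:

	# Hedged sketch: verify the hostname and the /etc/hosts entry on the node.
	minikube ssh -p pause-752547 -- hostname
	minikube ssh -p pause-752547 -- grep 127.0.1.1 /etc/hosts
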
	I1019 17:16:25.254527  144554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:16:25.254570  144554 ubuntu.go:190] setting up certificates
	I1019 17:16:25.254581  144554 provision.go:84] configureAuth start
	I1019 17:16:25.254639  144554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-752547
	I1019 17:16:25.279568  144554 provision.go:143] copyHostCerts
	I1019 17:16:25.279646  144554 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:16:25.279665  144554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:16:25.279746  144554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:16:25.279857  144554 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:16:25.279868  144554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:16:25.279894  144554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:16:25.279962  144554 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:16:25.279973  144554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:16:25.280001  144554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:16:25.280055  144554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.pause-752547 san=[127.0.0.1 192.168.76.2 localhost minikube pause-752547]
	I1019 17:16:26.075170  144554 provision.go:177] copyRemoteCerts
	I1019 17:16:26.075317  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:16:26.075379  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:26.095289  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:26.212638  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:16:26.233513  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 17:16:26.261982  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:16:26.284016  144554 provision.go:87] duration metric: took 1.029413792s to configureAuth
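
configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, pause-752547) and copies it to /etc/docker on the node. A hedged sketch for confirming the SANs actually made it into the generated cert on the CI host:

	# Hedged sketch: print the SAN extension of the freshly generated server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
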
	I1019 17:16:26.284040  144554 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:16:26.284253  144554 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:26.284357  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:26.306202  144554 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:26.306504  144554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1019 17:16:26.306525  144554 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:16:26.548435  144876 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:16:26.548669  144876 start.go:159] libmachine.API.Create for "force-systemd-env-386165" (driver="docker")
	I1019 17:16:26.548706  144876 client.go:171] LocalClient.Create starting
	I1019 17:16:26.548782  144876 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 17:16:26.548821  144876 main.go:143] libmachine: Decoding PEM data...
	I1019 17:16:26.548842  144876 main.go:143] libmachine: Parsing certificate...
	I1019 17:16:26.548898  144876 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 17:16:26.548915  144876 main.go:143] libmachine: Decoding PEM data...
	I1019 17:16:26.548924  144876 main.go:143] libmachine: Parsing certificate...
	I1019 17:16:26.549283  144876 cli_runner.go:164] Run: docker network inspect force-systemd-env-386165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:16:26.569795  144876 cli_runner.go:211] docker network inspect force-systemd-env-386165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:16:26.569889  144876 network_create.go:284] running [docker network inspect force-systemd-env-386165] to gather additional debugging logs...
	I1019 17:16:26.569912  144876 cli_runner.go:164] Run: docker network inspect force-systemd-env-386165
	W1019 17:16:26.589410  144876 cli_runner.go:211] docker network inspect force-systemd-env-386165 returned with exit code 1
	I1019 17:16:26.589454  144876 network_create.go:287] error running [docker network inspect force-systemd-env-386165]: docker network inspect force-systemd-env-386165: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-386165 not found
	I1019 17:16:26.589468  144876 network_create.go:289] output of [docker network inspect force-systemd-env-386165]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-386165 not found
	
	** /stderr **
	I1019 17:16:26.589575  144876 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:26.607165  144876 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
	I1019 17:16:26.607436  144876 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74bebb68d32f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:9e:84:17:01:b0} reservation:<nil>}
	I1019 17:16:26.607716  144876 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9382370e2eea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:16:7c:3f:44:e1} reservation:<nil>}
	I1019 17:16:26.608007  144876 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d61af5095f6a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:20:1a:dc:35:6d} reservation:<nil>}
	I1019 17:16:26.608383  144876 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c9700}
	I1019 17:16:26.608405  144876 network_create.go:124] attempt to create docker network force-systemd-env-386165 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:16:26.608472  144876 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-386165 force-systemd-env-386165
	I1019 17:16:26.677047  144876 network_create.go:108] docker network force-systemd-env-386165 192.168.85.0/24 created
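
Subnet selection walks candidate private /24s in steps of 9 in the third octet (192.168.49.0, .58.0, .67.0, .76.0, ...) and takes the first one with no bridge attached, here 192.168.85.0/24. A hedged sketch to read back what was created:

	# Hedged sketch: confirm the subnet and gateway of the new network.
	docker network inspect force-systemd-env-386165 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
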
	I1019 17:16:26.677080  144876 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-386165" container
	I1019 17:16:26.677169  144876 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:16:26.694316  144876 cli_runner.go:164] Run: docker volume create force-systemd-env-386165 --label name.minikube.sigs.k8s.io=force-systemd-env-386165 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:16:26.712616  144876 oci.go:103] Successfully created a docker volume force-systemd-env-386165
	I1019 17:16:26.712704  144876 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-386165-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-386165 --entrypoint /usr/bin/test -v force-systemd-env-386165:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:16:27.203407  144876 oci.go:107] Successfully prepared a docker volume force-systemd-env-386165
	I1019 17:16:27.203452  144876 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:27.203471  144876 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:16:27.203554  144876 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-386165:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 17:16:31.686565  144554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:16:31.686587  144554 machine.go:97] duration metric: took 7.042127092s to provisionDockerMachine
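
The final provisioning step wrote the runtime flags into /etc/sysconfig/crio.minikube and restarted CRI-O; the contents echoed back at 17:16:31.686 are exactly what went in. A sketch of reading the file back on the node (the assumption here, not shown in this log, is that the kicbase crio.service loads it through an EnvironmentFile directive):

	# Hedged sketch: the sysconfig file as it should now exist on the node.
	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
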
	I1019 17:16:31.686598  144554 start.go:293] postStartSetup for "pause-752547" (driver="docker")
	I1019 17:16:31.686609  144554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:16:31.686678  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:16:31.686726  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:31.714321  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:31.822400  144554 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:16:31.825874  144554 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:16:31.825905  144554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:16:31.825916  144554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:16:31.825968  144554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:16:31.826213  144554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:16:31.826342  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:16:31.835102  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:31.868085  144554 start.go:296] duration metric: took 181.471488ms for postStartSetup
	I1019 17:16:31.868184  144554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:16:31.868231  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:31.888675  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:32.009599  144554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:16:32.018060  144554 fix.go:56] duration metric: took 7.420531967s for fixHost
	I1019 17:16:32.018084  144554 start.go:83] releasing machines lock for "pause-752547", held for 7.420588303s
	I1019 17:16:32.018152  144554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-752547
	I1019 17:16:32.040845  144554 ssh_runner.go:195] Run: cat /version.json
	I1019 17:16:32.040900  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:32.041143  144554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:16:32.041200  144554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752547
	I1019 17:16:32.065706  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:32.070609  144554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/pause-752547/id_rsa Username:docker}
	I1019 17:16:32.305856  144554 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:32.313413  144554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:16:32.416115  144554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:16:32.426660  144554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:16:32.426726  144554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:16:32.448820  144554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:16:32.448846  144554 start.go:496] detecting cgroup driver to use...
	I1019 17:16:32.448878  144554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:16:32.448943  144554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:16:32.471481  144554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:16:32.501872  144554 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:16:32.501937  144554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:16:32.522194  144554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:16:32.540603  144554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:16:32.839145  144554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:16:33.128937  144554 docker.go:234] disabling docker service ...
	I1019 17:16:33.129013  144554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:16:33.164688  144554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:16:33.213780  144554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:16:33.554487  144554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:16:33.763758  144554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
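
To hand the CRI socket over to CRI-O, minikube tears down the Docker runtimes in a fixed order: stop the socket, stop the service, disable the socket, then mask the service so nothing can pull it back in. A hedged check that both stay down:

	# Hedged sketch: confirm docker and cri-docker are masked and inactive.
	sudo systemctl is-enabled docker.service cri-docker.service   # expect "masked"
	sudo systemctl is-active docker.service cri-docker.service || true
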
	I1019 17:16:33.781345  144554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:16:33.802279  144554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:16:33.802348  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.823975  144554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:16:33.824053  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.835148  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.851110  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.863287  144554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:16:33.877100  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.888929  144554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.900888  144554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:33.914668  144554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:16:33.928523  144554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:16:33.941370  144554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:34.159564  144554 ssh_runner.go:195] Run: sudo systemctl restart crio
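
Read together, the sed runs above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the cgroupfs manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. Reconstructed from those sed expressions (a sketch, not a capture of the actual file), the touched keys end up as:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
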
	I1019 17:16:34.370820  144554 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:16:34.370894  144554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:16:34.375201  144554 start.go:564] Will wait 60s for crictl version
	I1019 17:16:34.375257  144554 ssh_runner.go:195] Run: which crictl
	I1019 17:16:34.379411  144554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:16:34.418111  144554 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:16:34.418196  144554 ssh_runner.go:195] Run: crio --version
	I1019 17:16:34.456428  144554 ssh_runner.go:195] Run: crio --version
	I1019 17:16:34.496646  144554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
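
With /etc/crictl.yaml pointing at unix:///var/run/crio/crio.sock (written at 17:16:33.781 above), crictl needs no endpoint flags; the version probe in the log is the same as running it by hand. A hedged sketch:

	# Hedged sketch: query CRI-O through the configured crictl endpoint.
	sudo crictl version
	sudo crictl info --output json | head
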
	I1019 17:16:31.675253  144876 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-386165:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.471662091s)
	I1019 17:16:31.675283  144876 kic.go:203] duration metric: took 4.471808882s to extract preloaded images to volume ...
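
The preload never touches the node image itself: a throwaway container runs /usr/bin/tar to unpack the lz4 tarball into the named volume, which the node container later mounts at /var with images already in place. The same pattern in generic form (the volume and tarball names below are placeholders, not minikube's):

	# Hedged sketch: populate a named volume from an lz4 tarball via a
	# disposable container (demo-var and ./preloaded.tar.lz4 are made up).
	docker volume create demo-var
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" -v demo-var:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
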
	W1019 17:16:31.675427  144876 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:16:31.675534  144876 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:16:31.772075  144876 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-386165 --name force-systemd-env-386165 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-386165 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-386165 --network force-systemd-env-386165 --ip 192.168.85.2 --volume force-systemd-env-386165:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:16:32.177592  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Running}}
	I1019 17:16:32.206452  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Status}}
	I1019 17:16:32.234697  144876 cli_runner.go:164] Run: docker exec force-systemd-env-386165 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:16:32.297740  144876 oci.go:144] the created container "force-systemd-env-386165" has a running status.
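
The docker run above publishes 22, 2376, 5000, 8443, and 32443 on 127.0.0.1 with ephemeral host ports; later log lines resolve 22/tcp to 32993 via container inspect. A hedged one-liner that shows the whole mapping at once:

	# Hedged sketch: list every published port of the node container.
	docker port force-systemd-env-386165
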
	I1019 17:16:32.297767  144876 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa...
	I1019 17:16:33.270420  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1019 17:16:33.270488  144876 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:16:33.298126  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Status}}
	I1019 17:16:33.326980  144876 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:16:33.327000  144876 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-386165 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:16:33.396393  144876 cli_runner.go:164] Run: docker container inspect force-systemd-env-386165 --format={{.State.Status}}
	I1019 17:16:33.427219  144876 machine.go:94] provisionDockerMachine start ...
	I1019 17:16:33.427330  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:33.457830  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:33.458173  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:33.458183  144876 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:16:33.690299  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-386165
	
	I1019 17:16:33.690326  144876 ubuntu.go:182] provisioning hostname "force-systemd-env-386165"
	I1019 17:16:33.690414  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:33.720262  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:33.720568  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:33.720593  144876 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-386165 && echo "force-systemd-env-386165" | sudo tee /etc/hostname
	I1019 17:16:33.947801  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-386165
	
	I1019 17:16:33.947882  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:33.980537  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:33.980849  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:33.980876  144876 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-386165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-386165/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-386165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:16:34.172988  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:16:34.173079  144876 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:16:34.173113  144876 ubuntu.go:190] setting up certificates
	I1019 17:16:34.173142  144876 provision.go:84] configureAuth start
	I1019 17:16:34.173241  144876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-386165
	I1019 17:16:34.195876  144876 provision.go:143] copyHostCerts
	I1019 17:16:34.195918  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:16:34.195951  144876 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:16:34.195961  144876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:16:34.196049  144876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:16:34.196126  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:16:34.196148  144876 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:16:34.196163  144876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:16:34.196191  144876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:16:34.196235  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:16:34.196254  144876 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:16:34.196259  144876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:16:34.196289  144876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:16:34.196338  144876 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-386165 san=[127.0.0.1 192.168.85.2 force-systemd-env-386165 localhost minikube]
	I1019 17:16:35.104283  144876 provision.go:177] copyRemoteCerts
	I1019 17:16:35.104357  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:16:35.104427  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.125858  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:35.232645  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1019 17:16:35.232706  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1019 17:16:35.259207  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1019 17:16:35.259271  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:16:35.288197  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1019 17:16:35.288276  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:16:35.315637  144876 provision.go:87] duration metric: took 1.142467847s to configureAuth
	I1019 17:16:35.315664  144876 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:16:35.315881  144876 config.go:182] Loaded profile config "force-systemd-env-386165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:35.316014  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.339409  144876 main.go:143] libmachine: Using SSH client type: native
	I1019 17:16:35.339722  144876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1019 17:16:35.339742  144876 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:16:35.708275  144876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:16:35.708298  144876 machine.go:97] duration metric: took 2.28105391s to provisionDockerMachine
	I1019 17:16:35.708309  144876 client.go:174] duration metric: took 9.159590297s to LocalClient.Create
	I1019 17:16:35.708340  144876 start.go:167] duration metric: took 9.159655413s to libmachine.API.Create "force-systemd-env-386165"
	I1019 17:16:35.708358  144876 start.go:293] postStartSetup for "force-systemd-env-386165" (driver="docker")
	I1019 17:16:35.708370  144876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:16:35.708447  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:16:35.708508  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.738726  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:35.859609  144876 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:16:35.863519  144876 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:16:35.863546  144876 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:16:35.863558  144876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:16:35.863609  144876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:16:35.863683  144876 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:16:35.863689  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> /etc/ssl/certs/41112.pem
	I1019 17:16:35.863792  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:16:35.875631  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:35.910590  144876 start.go:296] duration metric: took 202.213743ms for postStartSetup
	I1019 17:16:35.911048  144876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-386165
	I1019 17:16:35.941110  144876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/config.json ...
	I1019 17:16:35.941372  144876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:16:35.941411  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:35.977900  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:36.088087  144876 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:16:36.096099  144876 start.go:128] duration metric: took 9.550820985s to createHost
	I1019 17:16:36.096121  144876 start.go:83] releasing machines lock for "force-systemd-env-386165", held for 9.550934496s
	I1019 17:16:36.096188  144876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-386165
	I1019 17:16:36.118951  144876 ssh_runner.go:195] Run: cat /version.json
	I1019 17:16:36.119027  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:36.119263  144876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:16:36.119321  144876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-386165
	I1019 17:16:36.149704  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:36.160996  144876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/force-systemd-env-386165/id_rsa Username:docker}
	I1019 17:16:34.499497  144554 cli_runner.go:164] Run: docker network inspect pause-752547 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:34.540166  144554 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:16:34.544831  144554 kubeadm.go:884] updating cluster {Name:pause-752547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-752547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:34.544965  144554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:34.545018  144554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:34.592699  144554 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:34.592720  144554 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:34.592775  144554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:34.640677  144554 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:34.640699  144554 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:34.640709  144554 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:16:34.640833  144554 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-752547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-752547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:16:34.640917  144554 ssh_runner.go:195] Run: crio config
	I1019 17:16:34.724535  144554 cni.go:84] Creating CNI manager for ""
	I1019 17:16:34.724559  144554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:34.724581  144554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:16:34.724605  144554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-752547 NodeName:pause-752547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:34.724774  144554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-752547"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:16:34.724934  144554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:34.736678  144554 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:34.736765  144554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:34.745017  144554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1019 17:16:34.766725  144554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:34.783896  144554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
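
The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what just landed in /var/tmp/minikube/kubeadm.yaml.new (2209 bytes). A hedged sketch for checking such a file offline, assuming the bundled kubeadm is new enough to ship "kubeadm config validate" (v1.26+):

	# Hedged sketch: validate the generated kubeadm config without applying it.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
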
	I1019 17:16:34.799740  144554 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:34.803811  144554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:34.964673  144554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:34.977840  144554 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547 for IP: 192.168.76.2
	I1019 17:16:34.977858  144554 certs.go:195] generating shared ca certs ...
	I1019 17:16:34.977874  144554 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:34.978013  144554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:16:34.978053  144554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:16:34.978059  144554 certs.go:257] generating profile certs ...
	I1019 17:16:34.978136  144554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key
	I1019 17:16:34.978199  144554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/apiserver.key.20454def
	I1019 17:16:34.978239  144554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/proxy-client.key
	I1019 17:16:34.978340  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:16:34.978366  144554 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:34.978379  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:16:34.978404  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:34.978426  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:34.978447  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:34.978486  144554 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:34.979063  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:35.001221  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:35.024239  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:35.064627  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:16:35.083938  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 17:16:35.108496  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:16:35.132074  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:35.154317  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:16:35.178042  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:35.216072  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:16:35.273928  144554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:16:35.342933  144554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:16:35.378336  144554 ssh_runner.go:195] Run: openssl version
	I1019 17:16:35.395187  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:16:35.430574  144554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:16:35.450679  144554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:16:35.450744  144554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:16:35.573315  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:16:35.604942  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:35.623091  144554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:35.635492  144554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:35.635592  144554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:35.770132  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:16:35.804855  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:16:35.821202  144554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:16:35.831344  144554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:16:35.831424  144554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:16:35.909005  144554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
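The three openssl/ln sequences above implement OpenSSL's hashed-directory CA lookup: a certificate under /etc/ssl/certs is located through a symlink named <subject-hash>.0, which is why 41112.pem gets 3ec20f2e.0, minikubeCA.pem gets b5213941.0, and 4111.pem gets 51391683.0. A minimal Go sketch of the same pattern (illustrative, not minikube's code; assumes the openssl binary is on PATH and ignores hash collisions, which would need .1, .2, ... suffixes):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<subject-hash>.0" symlink that the
// ln -fs commands in the log produce.
func linkBySubjectHash(certPath, certsDir string) error {
	// "openssl x509 -hash -noout" prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}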
	I1019 17:16:35.939421  144554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:35.950219  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:16:36.021502  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:16:36.076518  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:16:36.231203  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:16:36.294169  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:16:36.355675  144554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
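The stat on apiserver-kubelet-client.crt plus the six -checkend probes are the restart path's certificate freshness test: openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds (24 hours), and since every probe here passes silently, the existing kubeadm certs are kept. The same probe from Go, as a hypothetical helper rather than minikube's actual certs.go:

package main

import (
	"fmt"
	"os/exec"
)

// expiresSoon reports whether the cert expires within 24h; openssl
// exits with status 1 ("Certificate will expire") in that case.
func expiresSoon(cert string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run() != nil
}

func main() {
	fmt.Println(expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}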
	I1019 17:16:36.419827  144554 kubeadm.go:401] StartCluster: {Name:pause-752547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-752547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:36.419952  144554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:36.420013  144554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:36.499104  144554 cri.go:89] found id: "07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83"
	I1019 17:16:36.499171  144554 cri.go:89] found id: "0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0"
	I1019 17:16:36.499201  144554 cri.go:89] found id: "b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2"
	I1019 17:16:36.499219  144554 cri.go:89] found id: "b062a3965984c4cd7524d66035a8a2c2abcd865fca79cbffd9533f56e1948ecb"
	I1019 17:16:36.499238  144554 cri.go:89] found id: "8a24b2b0a2c9c614c20987c20119908c64d441f8f029e558f32af2405c7f6e82"
	I1019 17:16:36.499264  144554 cri.go:89] found id: "94209b2d27552f9e8c63fa54400bcfb70580abf93c73e695e379ac43c413bb6e"
	I1019 17:16:36.499290  144554 cri.go:89] found id: "bbf49db30ebb7d6d396c472885ef43fe613819b7c230af8d3fe337f3fe609fa7"
	I1019 17:16:36.499306  144554 cri.go:89] found id: "6ee0aa7f3241ab005481f75cf8b244cc6d96f2b782648dcd0e1f6d6ddd50106a"
	I1019 17:16:36.499323  144554 cri.go:89] found id: "334cbbfd7bb38d91993a30dff7863196ac739f81e8e6849b96aba3bd922ddaac"
	I1019 17:16:36.499366  144554 cri.go:89] found id: "4da6e945ad26d71d23fab266356135c9a32f167e61ea01537dc707875e6ce17d"
	I1019 17:16:36.499393  144554 cri.go:89] found id: "47fd425298dfb82b464ea2631993ccdbafec7010573692d5712f9a87a01f16f0"
	I1019 17:16:36.499419  144554 cri.go:89] found id: "ea03ca461af340c24dd1aa86c5a7ad19d30dae629f7e6a053f5747e9dd873fc2"
	I1019 17:16:36.499447  144554 cri.go:89] found id: "3fd9354b9af733751887463d963607f9345e24820435ad304bd0a19963b80997"
	I1019 17:16:36.499454  144554 cri.go:89] found id: "94ea94eabd15553243a43b3b9125ed085c7958afe81d37108c820fadd358a52c"
	I1019 17:16:36.499457  144554 cri.go:89] found id: ""
	I1019 17:16:36.499510  144554 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:16:36.592250  144554 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:16:36Z" level=error msg="open /run/runc: no such file or directory"
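The paused-container probe is best-effort: crictl reported fourteen kube-system container IDs, but runc has no state directory at /run/runc, so kubeadm.go:408 downgrades the failure to a warning and startup continues to restart detection. A sketch of that tolerate-and-continue pattern (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask runc for its container list, as the log does.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// Mirrors the W-level "unpause failed: list paused" line above:
		// a missing /run/runc just means runc is tracking no containers,
		// so log and move on instead of aborting.
		fmt.Println("warning: list paused containers:", err)
		return
	}
	fmt.Printf("%s\n", out)
}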
	I1019 17:16:36.592350  144554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:36.615980  144554 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:16:36.615996  144554 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:16:36.616050  144554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:16:36.646629  144554 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:16:36.647166  144554 kubeconfig.go:125] found "pause-752547" server: "https://192.168.76.2:8443"
	I1019 17:16:36.647755  144554 kapi.go:59] client config for pause-752547: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key", CAFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21202b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:16:36.648217  144554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 17:16:36.648230  144554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 17:16:36.648235  144554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 17:16:36.648241  144554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 17:16:36.648245  144554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
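The five "Feature gate default state" lines are client-go's environment-driven feature gates being initialized at their defaults. To the best of my knowledge they can be flipped per process with variables of the form KUBE_FEATURE_<Name> (for example KUBE_FEATURE_WatchListClient=true); that naming is an assumption about client-go internals, not something shown in this log.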
	I1019 17:16:36.648639  144554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:16:36.660829  144554 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:16:36.660863  144554 kubeadm.go:602] duration metric: took 44.860822ms to restartPrimaryControlPlane
	I1019 17:16:36.660894  144554 kubeadm.go:403] duration metric: took 241.076425ms to StartCluster
	I1019 17:16:36.660917  144554 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:36.660999  144554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:16:36.661641  144554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:36.661910  144554 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:16:36.662307  144554 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:16:36.662380  144554 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:36.667706  144554 out.go:179] * Enabled addons: 
	I1019 17:16:36.667734  144554 out.go:179] * Verifying Kubernetes components...
	I1019 17:16:36.270889  144876 ssh_runner.go:195] Run: systemctl --version
	I1019 17:16:36.392275  144876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:16:36.471650  144876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:16:36.486138  144876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:16:36.486261  144876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:16:36.517891  144876 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 17:16:36.517960  144876 start.go:496] detecting cgroup driver to use...
	I1019 17:16:36.517992  144876 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1019 17:16:36.518069  144876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:16:36.540157  144876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:16:36.558366  144876 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:16:36.558485  144876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:16:36.582745  144876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:16:36.616887  144876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:16:36.851442  144876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:16:37.056381  144876 docker.go:234] disabling docker service ...
	I1019 17:16:37.056491  144876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:16:37.099477  144876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:16:37.116879  144876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:16:37.322911  144876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:16:37.528486  144876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:16:37.545868  144876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:16:37.564422  144876 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:16:37.564489  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.574054  144876 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 17:16:37.574120  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.583132  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.591550  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.599875  144876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:16:37.607530  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.615912  144876 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.628613  144876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:16:37.637188  144876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:16:37.645014  144876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:16:37.655080  144876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:37.856177  144876 ssh_runner.go:195] Run: sudo systemctl restart crio
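Read together, the sed edits above should leave the /etc/crio/crio.conf.d/02-crio.conf drop-in looking roughly like this before the restart. The snippet is reconstructed from the commands, not captured from the node, and the section headers are assumed from the stock cri-o config layout:

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"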
	I1019 17:16:38.093887  144876 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:16:38.094005  144876 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:16:38.099328  144876 start.go:564] Will wait 60s for crictl version
	I1019 17:16:38.099438  144876 ssh_runner.go:195] Run: which crictl
	I1019 17:16:38.103095  144876 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:16:38.159754  144876 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:16:38.159881  144876 ssh_runner.go:195] Run: crio --version
	I1019 17:16:38.223665  144876 ssh_runner.go:195] Run: crio --version
	I1019 17:16:38.280781  144876 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:16:36.671014  144554 addons.go:515] duration metric: took 8.691246ms for enable addons: enabled=[]
	I1019 17:16:36.671057  144554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:37.020124  144554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:37.040875  144554 node_ready.go:35] waiting up to 6m0s for node "pause-752547" to be "Ready" ...
	I1019 17:16:38.283556  144876 cli_runner.go:164] Run: docker network inspect force-systemd-env-386165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:16:38.304600  144876 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:16:38.310969  144876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:16:38.327843  144876 kubeadm.go:884] updating cluster {Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:16:38.327967  144876 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:16:38.328019  144876 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.387135  144876 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.387161  144876 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:16:38.387997  144876 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:16:38.434497  144876 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:16:38.434522  144876 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:16:38.434530  144876 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:16:38.434626  144876 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-386165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
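One detail worth noting in the generated drop-in above: the bare ExecStart= line is intentional. In systemd, assigning an empty value to ExecStart clears any command list inherited from the packaged kubelet.service, so the second ExecStart= line becomes the only start command and fully controls the kubelet's flags.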
	I1019 17:16:38.434719  144876 ssh_runner.go:195] Run: crio config
	I1019 17:16:38.549364  144876 cni.go:84] Creating CNI manager for ""
	I1019 17:16:38.549435  144876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:16:38.549467  144876 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:16:38.549515  144876 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-386165 NodeName:force-systemd-env-386165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:16:38.549679  144876 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-386165"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
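The rendered config above combines four documents: kubeadm InitConfiguration and ClusterConfiguration (apiVersion v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. To check a config like this by hand, recent kubeadm releases (including the v1.34.1 used here) can validate it offline with: kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml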
	
	I1019 17:16:38.549784  144876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:16:38.560722  144876 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:16:38.560835  144876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:16:38.569482  144876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1019 17:16:38.593415  144876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:16:38.630953  144876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1019 17:16:38.648979  144876 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:16:38.654339  144876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:16:38.670275  144876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:16:38.871924  144876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:16:38.904035  144876 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165 for IP: 192.168.85.2
	I1019 17:16:38.904056  144876 certs.go:195] generating shared ca certs ...
	I1019 17:16:38.904072  144876 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:38.904255  144876 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:16:38.904320  144876 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:16:38.904334  144876 certs.go:257] generating profile certs ...
	I1019 17:16:38.904404  144876 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.key
	I1019 17:16:38.904422  144876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.crt with IP's: []
	I1019 17:16:39.104950  144876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.crt ...
	I1019 17:16:39.104981  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.crt: {Name:mkd6779e747eccbe3e78bd040b63457f325a62c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.105186  144876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.key ...
	I1019 17:16:39.105205  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/client.key: {Name:mk02a35b15a399172032d9128548461410cbffdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.105327  144876 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e
	I1019 17:16:39.105348  144876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:16:39.600909  144876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e ...
	I1019 17:16:39.600941  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e: {Name:mkd866d8775775d398b5578cba21fdc5b180dd89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.601174  144876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e ...
	I1019 17:16:39.601191  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e: {Name:mk851a7cda86e6d4bef40c63ef44abba6296f2fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:39.601304  144876 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt.3659d64e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt
	I1019 17:16:39.601404  144876 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key.3659d64e -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key
	I1019 17:16:39.601504  144876 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key
	I1019 17:16:39.601526  144876 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt with IP's: []
	I1019 17:16:40.614466  144876 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt ...
	I1019 17:16:40.614499  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt: {Name:mk0398eb88ae309dafe50044b7616ecf769c0e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:16:40.614725  144876 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key ...
	I1019 17:16:40.614742  144876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key: {Name:mk293eaabdb7a2b6ba00c1fcd773e11110ca6c0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
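The generate/write steps above (a fresh key, a CA-signed cert, each file written under a lock) correspond to routine x509 issuance. A compressed standard-library sketch, assuming the shared CA pair generated earlier and RSA keys in PKCS#1 PEM (consistent with the 1679-byte .key files in this log); this is an illustration, not minikube's actual crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

// pemBlock reads one PEM block from a file (paths illustrative).
func pemBlock(path string) *pem.Block {
	b, err := os.ReadFile(path)
	must(err)
	blk, _ := pem.Decode(b)
	if blk == nil {
		panic("no PEM block in " + path)
	}
	return blk
}

func main() {
	// Shared CA generated earlier in the log.
	ca, err := x509.ParseCertificate(pemBlock("ca.crt").Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(pemBlock("ca.key").Bytes)
	must(err)

	// Fresh key for the apiserver serving cert.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		// SANs match the "Generating cert ... with IP's" line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	must(err)
	must(os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600))
}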
	I1019 17:16:40.614858  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1019 17:16:40.614895  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1019 17:16:40.614915  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1019 17:16:40.614936  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1019 17:16:40.614951  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1019 17:16:40.614993  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1019 17:16:40.615012  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1019 17:16:40.615027  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1019 17:16:40.615089  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:16:40.615143  144876 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:16:40.615159  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:16:40.615185  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:16:40.615236  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:16:40.615271  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:16:40.615333  144876 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:16:40.615379  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem -> /usr/share/ca-certificates/4111.pem
	I1019 17:16:40.615407  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> /usr/share/ca-certificates/41112.pem
	I1019 17:16:40.615426  144876 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:40.615959  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:16:40.661337  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:16:40.702994  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:16:40.735962  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:16:40.767771  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1019 17:16:40.799028  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:16:40.829101  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:16:40.859791  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/force-systemd-env-386165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:16:40.882403  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:16:40.916904  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:16:40.951535  144876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:16:40.984818  144876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:16:41.012537  144876 ssh_runner.go:195] Run: openssl version
	I1019 17:16:41.024252  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:16:41.037359  144876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:16:41.041808  144876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:16:41.041908  144876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:16:41.103400  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:16:41.111590  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:16:41.122716  144876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:41.127067  144876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:41.127159  144876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:16:41.170832  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:16:41.179348  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:16:41.188588  144876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:16:41.193050  144876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:16:41.193148  144876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:16:41.237933  144876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:16:41.246464  144876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:16:41.251046  144876 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
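This is the mirror image of the pause-752547 flow earlier: there the stat of apiserver-kubelet-client.crt succeeded and minikube attempted a cluster restart, while here the cert is absent (it is one kubeadm itself generates during init, which is presumably why certs.go reads its absence as "likely first start"), so the flow falls through to kubeadm init below.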
	I1019 17:16:41.251130  144876 kubeadm.go:401] StartCluster: {Name:force-systemd-env-386165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-386165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:16:41.251219  144876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:16:41.251317  144876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:16:41.296816  144876 cri.go:89] found id: ""
	I1019 17:16:41.296918  144876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:16:41.309431  144876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:16:41.322016  144876 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:16:41.322113  144876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:16:41.336654  144876 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:16:41.336676  144876 kubeadm.go:158] found existing configuration files:
	
	I1019 17:16:41.336758  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:16:41.350183  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:16:41.350271  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:16:41.366109  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:16:41.378695  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:16:41.378789  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:16:41.396529  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:16:41.412071  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:16:41.412177  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:16:41.424578  144876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:16:41.441109  144876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:16:41.441210  144876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:16:41.454020  144876 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:16:41.522386  144876 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:16:41.527014  144876 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:16:41.608574  144876 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:16:41.608691  144876 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:16:41.608769  144876 kubeadm.go:319] OS: Linux
	I1019 17:16:41.608844  144876 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:16:41.608940  144876 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:16:41.609032  144876 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:16:41.609109  144876 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:16:41.609187  144876 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:16:41.609273  144876 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:16:41.609366  144876 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:16:41.609452  144876 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:16:41.609525  144876 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:16:41.759064  144876 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:16:41.759208  144876 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:16:41.759329  144876 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:16:41.778925  144876 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:16:42.443223  144554 node_ready.go:49] node "pause-752547" is "Ready"
	I1019 17:16:42.443254  144554 node_ready.go:38] duration metric: took 5.402338036s for node "pause-752547" to be "Ready" ...
	I1019 17:16:42.443267  144554 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:16:42.443326  144554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:16:42.463693  144554 api_server.go:72] duration metric: took 5.801743174s to wait for apiserver process to appear ...
	I1019 17:16:42.463720  144554 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:16:42.463740  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:42.572856  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:42.572950  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
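The [+]/[-] lines are the apiserver's aggregated health checks. The root /healthz endpoint always prints "reason withheld" for failing checks (per-check detail lives at paths like /healthz/poststarthook/bootstrap-controller), so minikube simply re-polls on a short interval until the endpoint returns 200. A sketch of that loop (illustrative; a real client should trust the cluster's ca.crt instead of skipping TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Throwaway test client only: the cluster cert is self-signed, so
	// verification is skipped here rather than loading ca.crt.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
}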
	I1019 17:16:42.964525  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:42.979294  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:42.979325  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:16:43.463847  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:43.486736  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:43.486813  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:16:43.964394  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:43.983466  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:16:43.983544  144554 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
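
Note: the alternating "returned 500" / "healthz check failed" blocks above are minikube (api_server.go) polling the apiserver's /healthz endpoint until every poststarthook reports ok. A minimal sketch of that polling loop in Go, using the endpoint from the log; InsecureSkipVerify is only to keep the sketch self-contained (minikube itself verifies the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch-only shortcut
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz") // endpoint taken from the log
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz -> %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // all poststarthooks reported ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

On failure the apiserver returns the per-check [+]/[-] breakdown in the response body, which is exactly what gets echoed into the log above; each retry shows fewer [-] entries as the poststarthooks complete.
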
	I1019 17:16:41.782329  144876 out.go:252]   - Generating certificates and keys ...
	I1019 17:16:41.782476  144876 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:16:41.782595  144876 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:16:42.153691  144876 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:16:42.881460  144876 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:16:44.090729  144876 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:16:44.455713  144876 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:16:44.520620  144876 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:16:44.520772  144876 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-386165 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:16:45.659010  144876 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:16:45.659634  144876 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-386165 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:16:45.967615  144876 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:16:44.464286  144554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:16:44.483227  144554 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:16:44.487553  144554 api_server.go:141] control plane version: v1.34.1
	I1019 17:16:44.487575  144554 api_server.go:131] duration metric: took 2.023847727s to wait for apiserver health ...
	I1019 17:16:44.487585  144554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:16:44.496309  144554 system_pods.go:59] 7 kube-system pods found
	I1019 17:16:44.496336  144554 system_pods.go:61] "coredns-66bc5c9577-fmhl6" [43eda531-cfb2-4771-bb86-16a49fefe7fb] Running
	I1019 17:16:44.496342  144554 system_pods.go:61] "etcd-pause-752547" [d6f4969b-8fb6-4b27-88c3-3e1f6e043d63] Running
	I1019 17:16:44.496346  144554 system_pods.go:61] "kindnet-5z6kw" [b7a10ba9-dd39-4b6a-8fba-777d8bf9cdc4] Running
	I1019 17:16:44.496351  144554 system_pods.go:61] "kube-apiserver-pause-752547" [451e7db6-d7e4-4247-9971-f3ba4fdbbcb7] Running
	I1019 17:16:44.496361  144554 system_pods.go:61] "kube-controller-manager-pause-752547" [33731318-b561-4c38-b33d-a21fc5c52ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:44.496366  144554 system_pods.go:61] "kube-proxy-5t82h" [7ae7f5b6-768e-4958-ab63-4851df32c123] Running
	I1019 17:16:44.496373  144554 system_pods.go:61] "kube-scheduler-pause-752547" [fde42862-4f3c-4f64-99c6-af8d842aaec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:44.496378  144554 system_pods.go:74] duration metric: took 8.788773ms to wait for pod list to return data ...
	I1019 17:16:44.496388  144554 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:16:44.506850  144554 default_sa.go:45] found service account: "default"
	I1019 17:16:44.506871  144554 default_sa.go:55] duration metric: took 10.477697ms for default service account to be created ...
	I1019 17:16:44.506880  144554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:16:44.509791  144554 system_pods.go:86] 7 kube-system pods found
	I1019 17:16:44.509851  144554 system_pods.go:89] "coredns-66bc5c9577-fmhl6" [43eda531-cfb2-4771-bb86-16a49fefe7fb] Running
	I1019 17:16:44.509872  144554 system_pods.go:89] "etcd-pause-752547" [d6f4969b-8fb6-4b27-88c3-3e1f6e043d63] Running
	I1019 17:16:44.509891  144554 system_pods.go:89] "kindnet-5z6kw" [b7a10ba9-dd39-4b6a-8fba-777d8bf9cdc4] Running
	I1019 17:16:44.509930  144554 system_pods.go:89] "kube-apiserver-pause-752547" [451e7db6-d7e4-4247-9971-f3ba4fdbbcb7] Running
	I1019 17:16:44.509958  144554 system_pods.go:89] "kube-controller-manager-pause-752547" [33731318-b561-4c38-b33d-a21fc5c52ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:16:44.509981  144554 system_pods.go:89] "kube-proxy-5t82h" [7ae7f5b6-768e-4958-ab63-4851df32c123] Running
	I1019 17:16:44.510016  144554 system_pods.go:89] "kube-scheduler-pause-752547" [fde42862-4f3c-4f64-99c6-af8d842aaec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:16:44.510042  144554 system_pods.go:126] duration metric: took 3.15519ms to wait for k8s-apps to be running ...
	I1019 17:16:44.510063  144554 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:16:44.510149  144554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:16:44.536999  144554 system_svc.go:56] duration metric: took 26.926103ms WaitForService to wait for kubelet
	I1019 17:16:44.537024  144554 kubeadm.go:587] duration metric: took 7.875079343s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:16:44.537041  144554 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:16:44.546853  144554 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:16:44.546941  144554 node_conditions.go:123] node cpu capacity is 2
	I1019 17:16:44.546968  144554 node_conditions.go:105] duration metric: took 9.921122ms to run NodePressure ...
	I1019 17:16:44.546994  144554 start.go:242] waiting for startup goroutines ...
	I1019 17:16:44.547032  144554 start.go:247] waiting for cluster config update ...
	I1019 17:16:44.547056  144554 start.go:256] writing updated cluster config ...
	I1019 17:16:44.547443  144554 ssh_runner.go:195] Run: rm -f paused
	I1019 17:16:44.550993  144554 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:44.551629  144554 kapi.go:59] client config for pause-752547: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key", CAFile:"/home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21202b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:16:44.560597  144554 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fmhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.572529  144554 pod_ready.go:94] pod "coredns-66bc5c9577-fmhl6" is "Ready"
	I1019 17:16:44.572611  144554 pod_ready.go:86] duration metric: took 11.939041ms for pod "coredns-66bc5c9577-fmhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.580430  144554 pod_ready.go:83] waiting for pod "etcd-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.592998  144554 pod_ready.go:94] pod "etcd-pause-752547" is "Ready"
	I1019 17:16:44.593069  144554 pod_ready.go:86] duration metric: took 12.566598ms for pod "etcd-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.596363  144554 pod_ready.go:83] waiting for pod "kube-apiserver-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.605151  144554 pod_ready.go:94] pod "kube-apiserver-pause-752547" is "Ready"
	I1019 17:16:44.605227  144554 pod_ready.go:86] duration metric: took 8.794878ms for pod "kube-apiserver-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:44.612381  144554 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:46.626915  144554 pod_ready.go:94] pod "kube-controller-manager-pause-752547" is "Ready"
	I1019 17:16:46.626955  144554 pod_ready.go:86] duration metric: took 2.014490355s for pod "kube-controller-manager-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:46.755336  144554 pod_ready.go:83] waiting for pod "kube-proxy-5t82h" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:47.155070  144554 pod_ready.go:94] pod "kube-proxy-5t82h" is "Ready"
	I1019 17:16:47.155099  144554 pod_ready.go:86] duration metric: took 399.732785ms for pod "kube-proxy-5t82h" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:47.355749  144554 pod_ready.go:83] waiting for pod "kube-scheduler-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:48.156033  144554 pod_ready.go:94] pod "kube-scheduler-pause-752547" is "Ready"
	I1019 17:16:48.156078  144554 pod_ready.go:86] duration metric: took 800.301349ms for pod "kube-scheduler-pause-752547" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:16:48.156091  144554 pod_ready.go:40] duration metric: took 3.605001305s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:16:48.233292  144554 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:16:48.236663  144554 out.go:179] * Done! kubectl is now configured to use "pause-752547" cluster and "default" namespace by default
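
Note: the pod_ready.go waits above check each kube-system pod's Ready condition in turn. A rough client-go equivalent of that loop (hypothetical kubeconfig path, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			ready := 0
			for i := range pods.Items {
				if podReady(&pods.Items[i]) {
					ready++
				}
			}
			fmt.Printf("%d/%d kube-system pods Ready\n", ready, len(pods.Items))
			if ready == len(pods.Items) {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

This explains the 2s wait logged for kube-controller-manager-pause-752547: its Ready condition stayed ContainersNotReady for a couple of poll cycles after the restart.
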
	I1019 17:16:46.264377  144876 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:16:46.880638  144876 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:16:46.880942  144876 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:16:47.516905  144876 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:16:47.677319  144876 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:16:48.144334  144876 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:16:49.092336  144876 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:16:51.273618  144876 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:16:51.274444  144876 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:16:51.277240  144876 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
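
Note: the [certs] lines above show kubeadm (in the parallel force-systemd-env-386165 run, PID 144876) minting a CA plus SAN-bearing leaf certificates. For illustration only, the same CA-and-leaf shape with Go's crypto/x509, with names and IPs copied from the log; kubeadm's real implementation lives in k8s.io/kubernetes/cmd/kubeadm:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, analogous to "Generating \"etcd/ca\" certificate and key".
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "etcd-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the SAN set reported in the log for etcd/server.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "etcd-server"},
		DNSNames:     []string{"force-systemd-env-386165", "localhost"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1"), net.ParseIP("::1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, leafKey)
	fmt.Printf("CA %d bytes, leaf %d bytes\n", len(caDER), len(leafDER))
}
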
	
	
	==> CRI-O <==
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.659127056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.732637259Z" level=info msg="Created container b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2: kube-system/kindnet-5z6kw/kindnet-cni" id=0d28d40b-1c32-4105-8472-ee2391451250 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.734087494Z" level=info msg="Starting container: b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2" id=61dbd5eb-04a2-43a3-a710-d27c23d00fb5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.73695437Z" level=info msg="Started container" PID=2353 containerID=b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2 description=kube-system/kindnet-5z6kw/kindnet-cni id=61dbd5eb-04a2-43a3-a710-d27c23d00fb5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=62dc86861fb08cbbe8a933c2746b94aaac23ce2d0588697e3f2cebb325108b79
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.786779421Z" level=info msg="Created container 07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83: kube-system/coredns-66bc5c9577-fmhl6/coredns" id=e5cf0414-4785-452b-8b9f-8022129db909 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.789384773Z" level=info msg="Starting container: 07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83" id=77697df0-6404-495c-9f86-ce8c59af82ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:35 pause-752547 crio[2099]: time="2025-10-19T17:16:35.794868768Z" level=info msg="Started container" PID=2368 containerID=07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83 description=kube-system/coredns-66bc5c9577-fmhl6/coredns id=77697df0-6404-495c-9f86-ce8c59af82ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ea5ec8e996c8d63af46483aeec9496a07892f6a303abf109226e3e27374cd77
	Oct 19 17:16:36 pause-752547 crio[2099]: time="2025-10-19T17:16:36.124959032Z" level=info msg="Created container 0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0: kube-system/kube-proxy-5t82h/kube-proxy" id=3c1fc3a2-b9a4-489a-a4e3-49a49a82ba84 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:16:36 pause-752547 crio[2099]: time="2025-10-19T17:16:36.134731616Z" level=info msg="Starting container: 0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0" id=ad84ae6b-8f89-4149-afb7-014219eac519 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:16:36 pause-752547 crio[2099]: time="2025-10-19T17:16:36.146170649Z" level=info msg="Started container" PID=2363 containerID=0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0 description=kube-system/kube-proxy-5t82h/kube-proxy id=ad84ae6b-8f89-4149-afb7-014219eac519 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38a087e1bd4894631e6f7e33cba60db2ca50542568694c43f227f1d3e18105f2
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.143559315Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.147853845Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.147890514Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.147914612Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.152833179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.152865877Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.152885422Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.162943316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.163034935Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.163074222Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.166902738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.1669393Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.166959838Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.17490474Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:16:46 pause-752547 crio[2099]: time="2025-10-19T17:16:46.174944437Z" level=info msg="Updated default CNI network name to kindnet"
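
Note: the CREATE/WRITE/RENAME "CNI monitoring event" lines above come from an inotify-style watch CRI-O keeps on the CNI config directory; on each event it re-parses the conflist files and updates its default network (here kindnet writes a .temp file and renames it into place). A minimal equivalent of that watch loop with github.com/fsnotify/fsnotify, directory path taken from the log:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// CRI-O reacts to events like these by reloading the CNI
			// network configuration from the directory.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
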
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	07974c9cd727f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   19 seconds ago       Running             coredns                   1                   3ea5ec8e996c8       coredns-66bc5c9577-fmhl6               kube-system
	0175839b90bb2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   19 seconds ago       Running             kube-proxy                1                   38a087e1bd489       kube-proxy-5t82h                       kube-system
	b83e5f99bc515       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   19 seconds ago       Running             kindnet-cni               1                   62dc86861fb08       kindnet-5z6kw                          kube-system
	b062a3965984c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago       Running             kube-apiserver            1                   2737e0eaa4b14       kube-apiserver-pause-752547            kube-system
	8a24b2b0a2c9c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago       Running             kube-scheduler            1                   1b8b30b176947       kube-scheduler-pause-752547            kube-system
	94209b2d27552       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago       Running             kube-controller-manager   1                   2090a5fb3744b       kube-controller-manager-pause-752547   kube-system
	bbf49db30ebb7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago       Running             etcd                      1                   43b666445b4b9       etcd-pause-752547                      kube-system
	6ee0aa7f3241a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   3ea5ec8e996c8       coredns-66bc5c9577-fmhl6               kube-system
	334cbbfd7bb38       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   62dc86861fb08       kindnet-5z6kw                          kube-system
	4da6e945ad26d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   38a087e1bd489       kube-proxy-5t82h                       kube-system
	47fd425298dfb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   43b666445b4b9       etcd-pause-752547                      kube-system
	ea03ca461af34       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   1b8b30b176947       kube-scheduler-pause-752547            kube-system
	3fd9354b9af73       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   2090a5fb3744b       kube-controller-manager-pause-752547   kube-system
	94ea94eabd155       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   2737e0eaa4b14       kube-apiserver-pause-752547            kube-system
	
	
	==> coredns [07974c9cd727f413e93d54c084c60831fa00e052fda6e58ea7e8db8c69bdeb83] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48674 - 13662 "HINFO IN 129519007070086537.9052892079714812723. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010519912s
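
Note: the repeated "plugin/ready: Still waiting on: kubernetes" lines mean CoreDNS's ready plugin kept its HTTP readiness endpoint (by default :8181/ready) failing until the kubernetes plugin finished syncing against the recovering apiserver. A sketch of probing that endpoint, assuming the pod IP is supplied via a hypothetical COREDNS_IP environment variable:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	url := fmt.Sprintf("http://%s:8181/ready", os.Getenv("COREDNS_IP")) // assumed env var
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("ready ->", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return // all plugins reported ready
			}
		}
		time.Sleep(time.Second)
	}
}
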
	
	
	==> coredns [6ee0aa7f3241ab005481f75cf8b244cc6d96f2b782648dcd0e1f6d6ddd50106a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57887 - 8074 "HINFO IN 6553929530836081297.8498647211336222654. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021393616s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-752547
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-752547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=pause-752547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_15_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:15:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-752547
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:16:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:15:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:15:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:15:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:16:21 +0000   Sun, 19 Oct 2025 17:16:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-752547
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                3d89df2f-46c5-46d7-b087-ef25fcc7a506
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fmhl6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-752547                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-5z6kw                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-752547             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-752547    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-5t82h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-752547             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 9s                 kube-proxy       
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-752547 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 91s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-752547 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-752547 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 91s                kubelet          Starting kubelet.
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-752547 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-752547 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-752547 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-752547 event: Registered Node pause-752547 in Controller
	  Normal   NodeReady                34s                kubelet          Node pause-752547 status is now: NodeReady
	  Normal   RegisteredNode           8s                 node-controller  Node pause-752547 event: Registered Node pause-752547 in Controller
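
Note: the MemoryPressure/DiskPressure/PIDPressure rows in the Conditions table above are the same node conditions minikube's NodePressure verification reads (see the node_conditions.go lines earlier in this log). A minimal client-go version of that read, assuming in-cluster configuration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-752547", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %s  %s\n", c.Type, c.Status, c.Reason)
	}
	// Capacity fields match the "node storage ephemeral capacity" /
	// "node cpu capacity" lines logged by minikube.
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
}
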
	
	
	==> dmesg <==
	[  +3.685397] overlayfs: idmapped layers are currently not supported
	[Oct19 16:53] overlayfs: idmapped layers are currently not supported
	[ +41.111710] overlayfs: idmapped layers are currently not supported
	[Oct19 16:55] overlayfs: idmapped layers are currently not supported
	[  +3.291702] overlayfs: idmapped layers are currently not supported
	[ +36.586345] overlayfs: idmapped layers are currently not supported
	[Oct19 16:56] overlayfs: idmapped layers are currently not supported
	[Oct19 16:58] overlayfs: idmapped layers are currently not supported
	[Oct19 17:02] overlayfs: idmapped layers are currently not supported
	[Oct19 17:03] overlayfs: idmapped layers are currently not supported
	[Oct19 17:04] overlayfs: idmapped layers are currently not supported
	[Oct19 17:05] overlayfs: idmapped layers are currently not supported
	[Oct19 17:06] overlayfs: idmapped layers are currently not supported
	[Oct19 17:07] overlayfs: idmapped layers are currently not supported
	[Oct19 17:08] overlayfs: idmapped layers are currently not supported
	[  +0.231072] overlayfs: idmapped layers are currently not supported
	[Oct19 17:09] overlayfs: idmapped layers are currently not supported
	[ +28.820689] overlayfs: idmapped layers are currently not supported
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [47fd425298dfb82b464ea2631993ccdbafec7010573692d5712f9a87a01f16f0] <==
	{"level":"warn","ts":"2025-10-19T17:15:29.189403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.219350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.239507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.275101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.299099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.323835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:15:29.467564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43928","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:16:26.504134Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T17:16:26.504185Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-752547","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-19T17:16:26.504273Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:16:26.655769Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:16:26.655854Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:16:26.655879Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-19T17:16:26.655992Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-19T17:16:26.656012Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656255Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656301Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:16:26.656310Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656349Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:16:26.656364Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:16:26.656371Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:16:26.659181Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-19T17:16:26.659262Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:16:26.659293Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-19T17:16:26.659307Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-752547","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [bbf49db30ebb7d6d396c472885ef43fe613819b7c230af8d3fe337f3fe609fa7] <==
	{"level":"warn","ts":"2025-10-19T17:16:39.166641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.224078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.265758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.297250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.324742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.372746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.403234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.461315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.570310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.576182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.631421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.679832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.715814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.826678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.886671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.947421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:39.985979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.120714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.143845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.201307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.295646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.302863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.360957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.410634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:16:40.526712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:16:55 up 59 min,  0 user,  load average: 5.37, 3.19, 2.42
	Linux pause-752547 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [334cbbfd7bb38d91993a30dff7863196ac739f81e8e6849b96aba3bd922ddaac] <==
	I1019 17:15:41.008602       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:15:41.010512       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:15:41.010657       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:15:41.010671       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:15:41.010685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:15:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:15:41.196123       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:15:41.196193       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:15:41.196203       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:15:41.197136       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:16:11.196631       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:16:11.196787       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:16:11.196893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:16:11.197012       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 17:16:12.696670       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:12.696791       1 metrics.go:72] Registering metrics
	I1019 17:16:12.696942       1 controller.go:711] "Syncing nftables rules"
	I1019 17:16:21.202959       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:16:21.203014       1 main.go:301] handling current node
	
	
	==> kindnet [b83e5f99bc515f92fabbc4a26790ade51f31ca51067a36bcf380757d8ed4a5f2] <==
	I1019 17:16:35.881481       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:16:35.904538       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:16:35.904755       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:16:35.904811       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:16:35.904851       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:16:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:16:36.162793       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:16:36.170921       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:16:36.170963       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:16:36.171444       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:16:42.674609       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:16:42.674717       1 metrics.go:72] Registering metrics
	I1019 17:16:42.674803       1 controller.go:711] "Syncing nftables rules"
	I1019 17:16:46.143107       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:16:46.143233       1 main.go:301] handling current node
	
	
	==> kube-apiserver [94ea94eabd15553243a43b3b9125ed085c7958afe81d37108c820fadd358a52c] <==
	W1019 17:16:26.528026       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528102       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528378       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528474       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528577       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528669       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.528764       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.529038       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.530027       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.530861       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.530974       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531031       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531100       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531163       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531221       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531274       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531348       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531421       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531518       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531896       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.531977       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532019       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532052       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532084       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 17:16:26.532100       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b062a3965984c4cd7524d66035a8a2c2abcd865fca79cbffd9533f56e1948ecb] <==
	I1019 17:16:42.427934       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1019 17:16:42.492926       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:16:42.515572       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:16:42.515730       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:16:42.515885       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:16:42.542686       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:16:42.542782       1 policy_source.go:240] refreshing policies
	I1019 17:16:42.543894       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:16:42.546712       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:16:42.546833       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:16:42.546895       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:16:42.552618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:16:42.555890       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:16:42.566657       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:16:42.573362       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:16:42.577085       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:16:42.602710       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1019 17:16:42.649071       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:16:42.667716       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:16:43.135393       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:16:45.644164       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:16:47.099932       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:16:47.296596       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:16:47.345704       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:16:47.397626       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3fd9354b9af733751887463d963607f9345e24820435ad304bd0a19963b80997] <==
	I1019 17:15:38.856805       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-752547" podCIDRs=["10.244.0.0/24"]
	I1019 17:15:38.868691       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:15:38.868656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:15:38.868770       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:15:38.868831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:15:38.856047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:15:38.861617       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:15:38.861647       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:15:38.870335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:15:38.874646       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:15:38.875136       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:15:38.890777       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:15:38.891864       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:15:38.907407       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:15:38.907455       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:15:38.907489       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:15:38.907503       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:15:38.907529       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:15:38.913323       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:15:38.915281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:38.915314       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:15:38.915398       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:15:38.933370       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:15:38.933526       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:23.838324       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [94209b2d27552f9e8c63fa54400bcfb70580abf93c73e695e379ac43c413bb6e] <==
	I1019 17:16:47.073378       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:16:47.073535       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:16:47.073571       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:16:47.069445       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:16:47.075757       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:16:47.082295       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:16:47.091941       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:16:47.092138       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:16:47.092751       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:16:47.098601       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:16:47.098749       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:16:47.102011       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:16:47.104511       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:16:47.116367       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:47.118792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:47.118877       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:16:47.118910       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:16:47.125220       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:16:47.127839       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:16:47.127964       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:16:47.129308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:16:47.139966       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:16:47.140055       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:16:47.140133       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-752547"
	I1019 17:16:47.140177       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [0175839b90bb2837b8d81a14b6a0c0f65c72ef95396d90c73cfdabe15e8ab8d0] <==
	I1019 17:16:41.283432       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:16:45.368606       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:16:45.470613       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:16:45.494691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:16:45.494812       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:16:45.851981       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:16:45.852099       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:16:45.883916       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:16:45.884309       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:16:45.884536       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:45.885861       1 config.go:200] "Starting service config controller"
	I1019 17:16:45.891875       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:16:45.892046       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:16:45.892077       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:16:45.892115       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:16:45.892144       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:16:45.892860       1 config.go:309] "Starting node config controller"
	I1019 17:16:45.895954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:16:45.896043       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:16:45.992198       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:16:45.992455       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:16:45.992564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [4da6e945ad26d71d23fab266356135c9a32f167e61ea01537dc707875e6ce17d] <==
	I1019 17:15:41.071956       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:15:41.336206       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:15:41.438381       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:15:41.438508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:15:41.448975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:15:41.545337       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:15:41.545455       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:15:41.551538       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:15:41.551880       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:15:41.552079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:41.553349       1 config.go:200] "Starting service config controller"
	I1019 17:15:41.553551       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:15:41.553609       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:15:41.553658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:15:41.553702       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:15:41.553729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:15:41.555859       1 config.go:309] "Starting node config controller"
	I1019 17:15:41.562613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:15:41.562694       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:15:41.654125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:15:41.654123       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:15:41.654155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8a24b2b0a2c9c614c20987c20119908c64d441f8f029e558f32af2405c7f6e82] <==
	I1019 17:16:40.679583       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:16:45.529424       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:16:45.529465       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:16:45.546842       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:16:45.547051       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:16:45.547113       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:16:45.547162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:16:45.552893       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:45.554793       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:45.553146       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:45.554880       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:45.648058       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:16:45.659102       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:45.659291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [ea03ca461af340c24dd1aa86c5a7ad19d30dae629f7e6a053f5747e9dd873fc2] <==
	I1019 17:15:30.039946       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:15:33.359584       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:15:33.360849       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:15:33.366827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:15:33.366909       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:15:33.366939       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:15:33.366991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:15:33.373139       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:15:33.373175       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:15:33.383153       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:33.383184       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:33.467757       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:15:33.483254       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:15:33.483202       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:26.503123       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 17:16:26.503150       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 17:16:26.503229       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 17:16:26.503266       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:16:26.503283       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:16:26.503299       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1019 17:16:26.503607       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 17:16:26.503629       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 19 17:16:35 pause-752547 kubelet[1310]: E1019 17:16:35.497281    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-fmhl6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="43eda531-cfb2-4771-bb86-16a49fefe7fb" pod="kube-system/coredns-66bc5c9577-fmhl6"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.258583    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="605c53e70723f013bac6c727582e3b44" pod="kube-system/etcd-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.259577    1310 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:pause-752547\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.369670    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="8d19ac977bc6499011033b1f631b082a" pod="kube-system/kube-apiserver-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.430376    1310 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         pods "kube-scheduler-pause-752547" is forbidden: User "system:node:pause-752547" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-752547' and this object
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found, role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found]
	Oct 19 17:16:42 pause-752547 kubelet[1310]:  > podUID="7f55fc68ae235c75c793be76e9967fc5" pod="kube-system/kube-scheduler-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.454889    1310 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         pods "kube-controller-manager-pause-752547" is forbidden: User "system:node:pause-752547" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-752547' and this object
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 19 17:16:42 pause-752547 kubelet[1310]:  > podUID="58e1ade8c75f1764e96c79c6a8a92a17" pod="kube-system/kube-controller-manager-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.471405    1310 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         pods "kube-proxy-5t82h" is forbidden: User "system:node:pause-752547" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-752547' and this object
	Oct 19 17:16:42 pause-752547 kubelet[1310]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 19 17:16:42 pause-752547 kubelet[1310]:  > podUID="7ae7f5b6-768e-4958-ab63-4851df32c123" pod="kube-system/kube-proxy-5t82h"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.514100    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5z6kw\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="b7a10ba9-dd39-4b6a-8fba-777d8bf9cdc4" pod="kube-system/kindnet-5z6kw"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.525620    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-fmhl6\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="43eda531-cfb2-4771-bb86-16a49fefe7fb" pod="kube-system/coredns-66bc5c9577-fmhl6"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.594952    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-fmhl6\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="43eda531-cfb2-4771-bb86-16a49fefe7fb" pod="kube-system/coredns-66bc5c9577-fmhl6"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.604007    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="605c53e70723f013bac6c727582e3b44" pod="kube-system/etcd-pause-752547"
	Oct 19 17:16:42 pause-752547 kubelet[1310]: E1019 17:16:42.617750    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-752547\" is forbidden: User \"system:node:pause-752547\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-752547' and this object" podUID="8d19ac977bc6499011033b1f631b082a" pod="kube-system/kube-apiserver-pause-752547"
	Oct 19 17:16:45 pause-752547 kubelet[1310]: W1019 17:16:45.291680    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 19 17:16:48 pause-752547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:16:48 pause-752547 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:16:48 pause-752547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
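Note: the kubelet log above ends with systemd deactivating kubelet.service at 17:16:48, i.e. the pause did reach the node; the nonzero status below is what the harness then trips on. A sketch for re-running the pause by hand with full logging (profile name and flag style as used elsewhere in this run):

	out/minikube-linux-arm64 pause -p pause-752547 --alsologtostderr -v=1
	out/minikube-linux-arm64 status -p pause-752547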
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-752547 -n pause-752547
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-752547 -n pause-752547: exit status 2 (597.513561ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
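The --format argument is a Go template over minikube's status struct; Host, Kubelet, APIServer and Kubeconfig are the fields this suite queries. A sketch that prints all four in one call:

	out/minikube-linux-arm64 status -p pause-752547 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'

The exit code reflects more than the single field printed, which is why an APIServer of "Running" can coexist with exit status 2 and the harness marks it "may be ok".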
helpers_test.go:269: (dbg) Run:  kubectl --context pause-752547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.62s)
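Separate from the pause failure, both kube-proxy containers in the log above warn that nodePortAddresses is unset. What the warning asks for, written as the KubeProxyConfiguration that lives in the kube-proxy ConfigMap, should look roughly like this (a sketch; "primary" is the special value the warning itself recommends):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# Accept NodePort connections only on the node's primary addresses,
	# not on every local IP (the default the warning flags).
	nodePortAddresses: ["primary"]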

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (304.727793ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:31:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
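The check behind MK_ADDON_ENABLE_PAUSED shells out to runc, and runc's default state directory /run/runc does not exist on this cri-o node, so the check errors out instead of answering "not paused". A sketch for re-running the failing check and asking the CRI runtime directly (crictl being present in the node image is an assumption, though it normally ships with minikube):

	# The exact check minikube ran, per the stderr above:
	out/minikube-linux-arm64 -p old-k8s-version-125363 ssh "sudo runc list -f json"
	# Ask cri-o itself which containers exist and in what state:
	out/minikube-linux-arm64 -p old-k8s-version-125363 ssh "sudo crictl ps -a"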
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-125363 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-125363 describe deploy/metrics-server -n kube-system: exit status 1 (91.816181ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-125363 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
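The assertion greps the deployment description for the rewritten image "fake.domain/registry.k8s.io/echoserver:1.4"; because the enable step failed, the deployment was never created and describe has nothing to show. When the addon does deploy, the image under test can be read directly (a sketch):

	kubectl --context old-k8s-version-125363 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'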
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-125363
helpers_test.go:243: (dbg) docker inspect old-k8s-version-125363:

-- stdout --
	[
	    {
	        "Id": "7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18",
	        "Created": "2025-10-19T17:30:37.268621175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 218521,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:30:37.338413225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/hosts",
	        "LogPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18-json.log",
	        "Name": "/old-k8s-version-125363",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-125363:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-125363",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18",
	                "LowerDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-125363",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-125363/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-125363",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-125363",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-125363",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73713f3057bb6ebe25195be2a577889f020fc5714452a1e41658f6c7e4cf1180",
	            "SandboxKey": "/var/run/docker/netns/73713f3057bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-125363": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:84:29:a0:ac:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c605d5ace27fd5383c607c72991f6fd31798e2bf8285be119b02bf86a3e7e1c",
	                    "EndpointID": "f6f215365a450f1b16b0f0c5daceb1392a1efa08bf17e84e8881c58ea87bd366",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-125363",
	                        "7cebf5ae65ac"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
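The inspect dump confirms the container is running and that the apiserver port 8443/tcp is published on 127.0.0.1:33086. Both facts can be pulled without the full JSON via docker inspect's Go-template support (a sketch):

	docker inspect -f '{{.State.Status}}' old-k8s-version-125363
	docker inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
	  old-k8s-version-125363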
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-125363 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-125363 logs -n 25: (1.727409167s)
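The advice box in the stderr above already names the fuller capture; the harness keeps only the last 25 lines. For a complete log file (command taken verbatim from the box):

	out/minikube-linux-arm64 -p old-k8s-version-125363 logs --file=logs.txt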
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-953581 sudo systemctl status kubelet --all --full --no-pager                                                                       │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl cat kubelet --no-pager                                                                                       │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo journalctl -xeu kubelet --all --full --no-pager                                                                        │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/kubernetes/kubelet.conf                                                                                       │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /var/lib/kubelet/config.yaml                                                                                       │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status docker --all --full --no-pager                                                                        │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat docker --no-pager                                                                                        │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/docker/daemon.json                                                                                            │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo docker system info                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl status cri-docker --all --full --no-pager                                                                    │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat cri-docker --no-pager                                                                                    │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                               │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cri-dockerd --version                                                                                                  │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status containerd --all --full --no-pager                                                                    │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat containerd --no-pager                                                                                    │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /lib/systemd/system/containerd.service                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/containerd/config.toml                                                                                        │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo containerd config dump                                                                                                 │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status crio --all --full --no-pager                                                                          │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl cat crio --no-pager                                                                                          │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo crio config                                                                                                            │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ delete  │ -p bridge-953581                                                                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:30:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:30:30.495497  217644 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:30:30.495652  217644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:30:30.495675  217644 out.go:374] Setting ErrFile to fd 2...
	I1019 17:30:30.495691  217644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:30:30.495974  217644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:30:30.496431  217644 out.go:368] Setting JSON to false
	I1019 17:30:30.497312  217644 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4378,"bootTime":1760890652,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:30:30.497385  217644 start.go:143] virtualization:  
	I1019 17:30:30.500812  217644 out.go:179] * [old-k8s-version-125363] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:30:30.504756  217644 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:30:30.504819  217644 notify.go:221] Checking for updates...
	I1019 17:30:30.510824  217644 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:30:30.513673  217644 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:30:30.516674  217644 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:30:30.519544  217644 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:30:30.522461  217644 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:30:30.525970  217644 config.go:182] Loaded profile config "bridge-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:30:30.526126  217644 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:30:30.555651  217644 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:30:30.555800  217644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:30:30.616354  217644 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:30:30.607040995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:30:30.616457  217644 docker.go:319] overlay module found
	I1019 17:30:30.619911  217644 out.go:179] * Using the docker driver based on user configuration
	I1019 17:30:30.622793  217644 start.go:309] selected driver: docker
	I1019 17:30:30.622813  217644 start.go:930] validating driver "docker" against <nil>
	I1019 17:30:30.622826  217644 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:30:30.623553  217644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:30:30.678288  217644 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:30:30.667560015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:30:30.678454  217644 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:30:30.678708  217644 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:30:30.681723  217644 out.go:179] * Using Docker driver with root privileges
	I1019 17:30:30.684591  217644 cni.go:84] Creating CNI manager for ""
	I1019 17:30:30.684673  217644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:30:30.684688  217644 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:30:30.684785  217644 start.go:353] cluster config:
	{Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:30:30.688053  217644 out.go:179] * Starting "old-k8s-version-125363" primary control-plane node in "old-k8s-version-125363" cluster
	I1019 17:30:30.690908  217644 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:30:30.694043  217644 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:30:30.696970  217644 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:30:30.697023  217644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:30:30.697055  217644 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 17:30:30.697065  217644 cache.go:59] Caching tarball of preloaded images
	I1019 17:30:30.697166  217644 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:30:30.697175  217644 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 17:30:30.697285  217644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/config.json ...
	I1019 17:30:30.697312  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/config.json: {Name:mkeb83f789b02bf1ea06818c9a1dbd6863fa63bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:30.720148  217644 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:30:30.720175  217644 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:30:30.720192  217644 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:30:30.720215  217644 start.go:360] acquireMachinesLock for old-k8s-version-125363: {Name:mkd08e65b205b510576dbfd42cd5fdbceaaa1817 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:30:30.720330  217644 start.go:364] duration metric: took 94.099µs to acquireMachinesLock for "old-k8s-version-125363"
	I1019 17:30:30.720361  217644 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:30:30.720438  217644 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:30:27.628106  215318 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v bridge-953581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.779246452s)
	I1019 17:30:27.628143  215318 kic.go:203] duration metric: took 4.779383603s to extract preloaded images to volume ...
	W1019 17:30:27.628281  215318 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:30:27.628388  215318 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:30:27.738004  215318 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-953581 --name bridge-953581 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-953581 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-953581 --network bridge-953581 --ip 192.168.76.2 --volume bridge-953581:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:30:28.159029  215318 cli_runner.go:164] Run: docker container inspect bridge-953581 --format={{.State.Running}}
	I1019 17:30:28.184166  215318 cli_runner.go:164] Run: docker container inspect bridge-953581 --format={{.State.Status}}
	I1019 17:30:28.208039  215318 cli_runner.go:164] Run: docker exec bridge-953581 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:30:28.278708  215318 oci.go:144] the created container "bridge-953581" has a running status.
	I1019 17:30:28.278743  215318 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa...
	I1019 17:30:28.569972  215318 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:30:28.595216  215318 cli_runner.go:164] Run: docker container inspect bridge-953581 --format={{.State.Status}}
	I1019 17:30:28.625537  215318 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:30:28.625571  215318 kic_runner.go:114] Args: [docker exec --privileged bridge-953581 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:30:28.715626  215318 cli_runner.go:164] Run: docker container inspect bridge-953581 --format={{.State.Status}}
	I1019 17:30:28.735756  215318 machine.go:94] provisionDockerMachine start ...
	I1019 17:30:28.735845  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:28.772526  215318 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:28.772844  215318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1019 17:30:28.772853  215318 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:30:28.775132  215318 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
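The handshake EOF above is the usual first-dial race: sshd inside the freshly created container is not up yet, and provisioning simply retries until it answers (the same session succeeds at 17:30:31 below). A minimal bash sketch of that wait loop, reusing the port and key path printed in this log (not minikube's actual Go retry logic):

	# keep dialing until sshd in the container accepts the handshake
	until ssh -i /home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa \
	    -p 33078 -o StrictHostKeyChecking=no -o ConnectTimeout=2 docker@127.0.0.1 hostname 2>/dev/null; do
	  sleep 1
	done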
	I1019 17:30:30.723801  217644 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:30:30.724044  217644 start.go:159] libmachine.API.Create for "old-k8s-version-125363" (driver="docker")
	I1019 17:30:30.724084  217644 client.go:171] LocalClient.Create starting
	I1019 17:30:30.724171  217644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 17:30:30.724218  217644 main.go:143] libmachine: Decoding PEM data...
	I1019 17:30:30.724233  217644 main.go:143] libmachine: Parsing certificate...
	I1019 17:30:30.724298  217644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 17:30:30.724328  217644 main.go:143] libmachine: Decoding PEM data...
	I1019 17:30:30.724341  217644 main.go:143] libmachine: Parsing certificate...
	I1019 17:30:30.724713  217644 cli_runner.go:164] Run: docker network inspect old-k8s-version-125363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:30:30.740176  217644 cli_runner.go:211] docker network inspect old-k8s-version-125363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:30:30.740264  217644 network_create.go:284] running [docker network inspect old-k8s-version-125363] to gather additional debugging logs...
	I1019 17:30:30.740284  217644 cli_runner.go:164] Run: docker network inspect old-k8s-version-125363
	W1019 17:30:30.756453  217644 cli_runner.go:211] docker network inspect old-k8s-version-125363 returned with exit code 1
	I1019 17:30:30.756483  217644 network_create.go:287] error running [docker network inspect old-k8s-version-125363]: docker network inspect old-k8s-version-125363: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-125363 not found
	I1019 17:30:30.756496  217644 network_create.go:289] output of [docker network inspect old-k8s-version-125363]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-125363 not found
	
	** /stderr **
	I1019 17:30:30.756583  217644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:30:30.772567  217644 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
	I1019 17:30:30.772914  217644 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74bebb68d32f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:9e:84:17:01:b0} reservation:<nil>}
	I1019 17:30:30.773255  217644 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9382370e2eea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:16:7c:3f:44:e1} reservation:<nil>}
	I1019 17:30:30.773522  217644 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ee8ec2f8278 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:d0:5d:88:be:fb} reservation:<nil>}
	I1019 17:30:30.773976  217644 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3cff0}
	I1019 17:30:30.774001  217644 network_create.go:124] attempt to create docker network old-k8s-version-125363 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:30:30.774062  217644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-125363 old-k8s-version-125363
	I1019 17:30:30.832289  217644 network_create.go:108] docker network old-k8s-version-125363 192.168.85.0/24 created
	I1019 17:30:30.832319  217644 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-125363" container
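The network.go lines above show the subnet picker at work: candidate /24s are tried with the third octet stepping by 9 from 192.168.49.0, and the first one not held by an existing bridge wins (here 49, 58, 67 and 76 are taken, so 192.168.85.0/24 is chosen). A rough shell rendering of that scan, for illustration only (the real logic lives in minikube's network.go):

	# list subnets already claimed by docker bridges, then walk the candidates
	taken=$(docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' $(docker network ls -q))
	for o in $(seq 49 9 247); do
	  s="192.168.$o.0/24"
	  case " $taken " in *"$s"*) continue ;; esac   # skip taken subnets
	  echo "using free private subnet $s"; break
	done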
	I1019 17:30:30.832392  217644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:30:30.848838  217644 cli_runner.go:164] Run: docker volume create old-k8s-version-125363 --label name.minikube.sigs.k8s.io=old-k8s-version-125363 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:30:30.866942  217644 oci.go:103] Successfully created a docker volume old-k8s-version-125363
	I1019 17:30:30.867029  217644 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-125363-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-125363 --entrypoint /usr/bin/test -v old-k8s-version-125363:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:30:31.412742  217644 oci.go:107] Successfully prepared a docker volume old-k8s-version-125363
	I1019 17:30:31.412799  217644 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:30:31.412818  217644 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:30:31.412886  217644 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-125363:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 17:30:31.934201  215318 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-953581
	
	I1019 17:30:31.934228  215318 ubuntu.go:182] provisioning hostname "bridge-953581"
	I1019 17:30:31.934289  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:31.957447  215318 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:31.957778  215318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1019 17:30:31.957790  215318 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-953581 && echo "bridge-953581" | sudo tee /etc/hostname
	I1019 17:30:32.125627  215318 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-953581
	
	I1019 17:30:32.125803  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:32.147496  215318 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:32.147788  215318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1019 17:30:32.147804  215318 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-953581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-953581/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-953581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:30:32.311036  215318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:30:32.311129  215318 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:30:32.311231  215318 ubuntu.go:190] setting up certificates
	I1019 17:30:32.311269  215318 provision.go:84] configureAuth start
	I1019 17:30:32.311395  215318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-953581
	I1019 17:30:32.333372  215318 provision.go:143] copyHostCerts
	I1019 17:30:32.333449  215318 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:30:32.333459  215318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:30:32.333559  215318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:30:32.333694  215318 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:30:32.333701  215318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:30:32.333730  215318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:30:32.333879  215318 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:30:32.333884  215318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:30:32.333920  215318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:30:32.333976  215318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.bridge-953581 san=[127.0.0.1 192.168.76.2 bridge-953581 localhost minikube]
	I1019 17:30:32.721035  215318 provision.go:177] copyRemoteCerts
	I1019 17:30:32.721150  215318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:30:32.721214  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:32.740628  215318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa Username:docker}
	I1019 17:30:32.855651  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:30:32.876483  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 17:30:32.895990  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:30:32.914022  215318 provision.go:87] duration metric: took 602.708596ms to configureAuth
	I1019 17:30:32.914050  215318 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:30:32.914262  215318 config.go:182] Loaded profile config "bridge-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:30:32.914400  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:32.932374  215318 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:32.932687  215318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1019 17:30:32.932710  215318 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:30:33.266110  215318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:30:33.266134  215318 machine.go:97] duration metric: took 4.530360342s to provisionDockerMachine
	I1019 17:30:33.266143  215318 client.go:174] duration metric: took 11.286278961s to LocalClient.Create
	I1019 17:30:33.266167  215318 start.go:167] duration metric: took 11.286350938s to libmachine.API.Create "bridge-953581"
	I1019 17:30:33.266175  215318 start.go:293] postStartSetup for "bridge-953581" (driver="docker")
	I1019 17:30:33.266185  215318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:30:33.266257  215318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:30:33.266304  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:33.289368  215318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa Username:docker}
	I1019 17:30:33.395900  215318 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:30:33.400471  215318 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:30:33.400503  215318 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:30:33.400514  215318 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:30:33.400572  215318 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:30:33.400653  215318 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:30:33.400764  215318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:30:33.409653  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:30:33.430790  215318 start.go:296] duration metric: took 164.600906ms for postStartSetup
	I1019 17:30:33.431160  215318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-953581
	I1019 17:30:33.450186  215318 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/config.json ...
	I1019 17:30:33.450479  215318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:30:33.450525  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:33.473723  215318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa Username:docker}
	I1019 17:30:33.576111  215318 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:30:33.581751  215318 start.go:128] duration metric: took 11.605564869s to createHost
	I1019 17:30:33.581781  215318 start.go:83] releasing machines lock for "bridge-953581", held for 11.605701315s
	I1019 17:30:33.581857  215318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-953581
	I1019 17:30:33.600081  215318 ssh_runner.go:195] Run: cat /version.json
	I1019 17:30:33.600107  215318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:30:33.600141  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:33.600178  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:30:33.634625  215318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa Username:docker}
	I1019 17:30:33.644260  215318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa Username:docker}
	I1019 17:30:33.750892  215318 ssh_runner.go:195] Run: systemctl --version
	I1019 17:30:33.839222  215318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:30:33.882202  215318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:30:33.887652  215318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:30:33.887733  215318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:30:33.917387  215318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 17:30:33.917450  215318 start.go:496] detecting cgroup driver to use...
	I1019 17:30:33.917505  215318 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:30:33.917583  215318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:30:33.940133  215318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:30:33.954686  215318 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:30:33.954754  215318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:30:33.973981  215318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:30:34.002657  215318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:30:34.175022  215318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:30:34.348188  215318 docker.go:234] disabling docker service ...
	I1019 17:30:34.348298  215318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:30:34.377787  215318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:30:34.392764  215318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:30:34.543380  215318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:30:34.673381  215318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:30:34.688030  215318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:30:34.706460  215318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:30:34.706571  215318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:34.719422  215318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:30:34.719514  215318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:34.733995  215318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:34.747080  215318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:34.758461  215318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:30:34.767287  215318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:34.780526  215318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:34.796125  215318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:34.805654  215318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:30:34.814198  215318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:30:34.822618  215318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:30:34.952946  215318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:30:37.282820  215318 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.329786777s)
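Everything from 17:30:34.688 to the restart above boils down to pointing crictl at the cri-o socket and rewriting /etc/crio/crio.conf.d/02-crio.conf. Condensed into the commands as they would run on the node itself (lifted from the log, minus the ssh_runner plumbing):

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio   # the restart that took 2.33s above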
	I1019 17:30:37.282845  215318 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:30:37.282897  215318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:30:37.290463  215318 start.go:564] Will wait 60s for crictl version
	I1019 17:30:37.290528  215318 ssh_runner.go:195] Run: which crictl
	I1019 17:30:37.295051  215318 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:30:37.347393  215318 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:30:37.347478  215318 ssh_runner.go:195] Run: crio --version
	I1019 17:30:37.383004  215318 ssh_runner.go:195] Run: crio --version
	I1019 17:30:37.431267  215318 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:30:37.161069  217644 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-125363:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.748119371s)
	I1019 17:30:37.161125  217644 kic.go:203] duration metric: took 5.748302554s to extract preloaded images to volume ...
	W1019 17:30:37.161285  217644 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:30:37.161398  217644 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:30:37.247046  217644 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-125363 --name old-k8s-version-125363 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-125363 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-125363 --network old-k8s-version-125363 --ip 192.168.85.2 --volume old-k8s-version-125363:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:30:37.641480  217644 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Running}}
	I1019 17:30:37.673533  217644 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:30:37.703154  217644 cli_runner.go:164] Run: docker exec old-k8s-version-125363 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:30:37.768215  217644 oci.go:144] the created container "old-k8s-version-125363" has a running status.
	I1019 17:30:37.768257  217644 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa...
	I1019 17:30:38.208362  217644 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:30:38.275068  217644 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:30:38.326900  217644 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:30:38.326918  217644 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-125363 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:30:38.414175  217644 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:30:38.433927  217644 machine.go:94] provisionDockerMachine start ...
	I1019 17:30:38.434030  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:38.456499  217644 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:38.456837  217644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1019 17:30:38.456853  217644 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:30:38.457555  217644 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44192->127.0.0.1:33083: read: connection reset by peer
	I1019 17:30:37.434463  215318 cli_runner.go:164] Run: docker network inspect bridge-953581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:30:37.459954  215318 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:30:37.464248  215318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:30:37.477401  215318 kubeadm.go:884] updating cluster {Name:bridge-953581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-953581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:30:37.477525  215318 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:30:37.477588  215318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:30:37.514676  215318 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:30:37.514703  215318 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:30:37.514758  215318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:30:37.549301  215318 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:30:37.549324  215318 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:30:37.549332  215318 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:30:37.549421  215318 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=bridge-953581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:bridge-953581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
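That unit text is what lands on the node as the kubelet systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below). To inspect the rendered result by hand, one could reuse the same commands the audit table at the top of these logs records, e.g.:

	out/minikube-linux-arm64 -p bridge-953581 ssh -- sudo systemctl cat kubelet --no-pager
	out/minikube-linux-arm64 -p bridge-953581 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf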
	I1019 17:30:37.549519  215318 ssh_runner.go:195] Run: crio config
	I1019 17:30:37.621831  215318 cni.go:84] Creating CNI manager for "bridge"
	I1019 17:30:37.621863  215318 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:30:37.621888  215318 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-953581 NodeName:bridge-953581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:30:37.622018  215318 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-953581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:30:37.622091  215318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:30:37.636001  215318 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:30:37.636075  215318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:30:37.646833  215318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 17:30:37.680391  215318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:30:37.702931  215318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
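	The 2210-byte kubeadm.yaml.new staged here is the manifest rendered above; it is promoted to kubeadm.yaml before kubeadm runs. Such a config can be sanity-checked without mutating node state; a sketch using the binary path and config path from this log:

    # Validate the generated config only; --dry-run creates no cluster state (assumed step):
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run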
	I1019 17:30:37.741819  215318 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:30:37.752877  215318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
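	That one-liner is the idempotent /etc/hosts update minikube uses throughout this log: drop any stale control-plane.minikube.internal entry, append the current one, and copy the temp file back under sudo (a bare `>` redirect would run unprivileged and fail on /etc/hosts). Unrolled for readability:

    # $'\t...' is ANSI-C quoting for a literal tab, matching the hosts column separator.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.76.2	control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts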
	I1019 17:30:37.772650  215318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:30:38.046377  215318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:30:38.085861  215318 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581 for IP: 192.168.76.2
	I1019 17:30:38.085903  215318 certs.go:195] generating shared ca certs ...
	I1019 17:30:38.085920  215318 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:38.086067  215318 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:30:38.086110  215318 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:30:38.086121  215318 certs.go:257] generating profile certs ...
	I1019 17:30:38.086179  215318 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.key
	I1019 17:30:38.086196  215318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt with IP's: []
	I1019 17:30:38.511884  215318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt ...
	I1019 17:30:38.511959  215318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: {Name:mkdfa298f0b7caee84faa099da3f5695f1c62e8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:38.512200  215318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.key ...
	I1019 17:30:38.512239  215318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.key: {Name:mkd61a2f1a0c4d1e5d6821be72a9ec102d45f7bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:38.512391  215318 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.key.791f3126
	I1019 17:30:38.512432  215318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.crt.791f3126 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 17:30:39.204255  215318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.crt.791f3126 ...
	I1019 17:30:39.204292  215318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.crt.791f3126: {Name:mk0aff53707f69558d603be12d39296d537a198c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:39.204467  215318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.key.791f3126 ...
	I1019 17:30:39.204480  215318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.key.791f3126: {Name:mk246b3cf33635351fdaaa1aae85b27a019e0f5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:39.204563  215318 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.crt.791f3126 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.crt
	I1019 17:30:39.204639  215318 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.key.791f3126 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.key
	I1019 17:30:39.204711  215318 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.key
	I1019 17:30:39.204731  215318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.crt with IP's: []
	I1019 17:30:39.521904  215318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.crt ...
	I1019 17:30:39.521988  215318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.crt: {Name:mk7800def44fee4c592b4b5eebb69cd628835c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:39.522203  215318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.key ...
	I1019 17:30:39.522240  215318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.key: {Name:mk859be69b843b370acfd90a594eebb75cbb5ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:39.522668  215318 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:30:39.522746  215318 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:30:39.522771  215318 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:30:39.522831  215318 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:30:39.522904  215318 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:30:39.522953  215318 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:30:39.523067  215318 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:30:39.523792  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:30:39.546194  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:30:39.578193  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:30:39.600130  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:30:39.624225  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 17:30:39.652426  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:30:39.676224  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:30:39.695782  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:30:39.715735  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:30:39.740203  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:30:39.762023  215318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:30:39.783974  215318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:30:39.812829  215318 ssh_runner.go:195] Run: openssl version
	I1019 17:30:39.822472  215318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:30:39.839784  215318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:30:39.846571  215318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:30:39.846631  215318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:30:39.905304  215318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:30:39.913639  215318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:30:39.923810  215318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:30:39.928258  215318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:30:39.928367  215318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:30:39.971570  215318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:30:39.980913  215318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:30:39.990497  215318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:30:39.994821  215318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:30:39.994896  215318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:30:40.038424  215318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
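	The openssl x509 -hash runs above explain the opaque symlink names: OpenSSL looks certificates up in /etc/ssl/certs by subject hash plus a .0 suffix, which is where b5213941.0, 51391683.0, and 3ec20f2e.0 come from. The same link can be created generically:

    # Link a CA into the OpenSSL hash directory under its subject hash (sketch):
    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"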
	I1019 17:30:40.049827  215318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:30:40.055445  215318 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:30:40.055506  215318 kubeadm.go:401] StartCluster: {Name:bridge-953581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-953581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:30:40.055593  215318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:30:40.055660  215318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:30:40.088101  215318 cri.go:89] found id: ""
	I1019 17:30:40.088192  215318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:30:40.098138  215318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:30:40.107202  215318 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:30:40.107364  215318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:30:40.116368  215318 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:30:40.116389  215318 kubeadm.go:158] found existing configuration files:
	
	I1019 17:30:40.116448  215318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:30:40.125387  215318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:30:40.125479  215318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:30:40.133739  215318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:30:40.142694  215318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:30:40.142827  215318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:30:40.151662  215318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:30:40.160208  215318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:30:40.160281  215318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:30:40.168367  215318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:30:40.177687  215318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:30:40.177802  215318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:30:40.186493  215318 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:30:40.257903  215318 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 17:30:40.258182  215318 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 17:30:40.325249  215318 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
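	The first SystemVerification warning fires because this AWS host still boots with cgroups v1. The node's cgroup mode can be read directly; cgroup2fs means v2, tmpfs means a v1 hierarchy (assumed diagnostic, not part of the test run):

    stat -fc %T /sys/fs/cgroup/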
	I1019 17:30:41.618997  217644 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-125363
	
	I1019 17:30:41.619072  217644 ubuntu.go:182] provisioning hostname "old-k8s-version-125363"
	I1019 17:30:41.619167  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:41.641903  217644 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:41.642224  217644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1019 17:30:41.642245  217644 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-125363 && echo "old-k8s-version-125363" | sudo tee /etc/hostname
	I1019 17:30:41.814588  217644 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-125363
	
	I1019 17:30:41.814671  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:41.836569  217644 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:41.836877  217644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1019 17:30:41.836900  217644 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-125363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-125363/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-125363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:30:41.986939  217644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
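	The SSH script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1 rather than 127.0.0.1, and the empty `<nil>` output indicates it ran cleanly. A minimal post-check, assuming ssh access to the profile:

    # Confirm the hostname and its loopback alias took effect (assumed check):
    minikube -p old-k8s-version-125363 ssh -- 'hostname; grep 127.0.1.1 /etc/hosts'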
	I1019 17:30:41.987035  217644 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:30:41.987085  217644 ubuntu.go:190] setting up certificates
	I1019 17:30:41.987112  217644 provision.go:84] configureAuth start
	I1019 17:30:41.987202  217644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:30:42.009207  217644 provision.go:143] copyHostCerts
	I1019 17:30:42.009287  217644 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:30:42.009297  217644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:30:42.009389  217644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:30:42.009527  217644 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:30:42.009533  217644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:30:42.009561  217644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:30:42.009626  217644 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:30:42.009631  217644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:30:42.009655  217644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:30:42.009712  217644 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-125363 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-125363]
	I1019 17:30:42.549931  217644 provision.go:177] copyRemoteCerts
	I1019 17:30:42.549997  217644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:30:42.550041  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:42.569277  217644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:30:42.676172  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:30:42.696085  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1019 17:30:42.715643  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:30:42.735306  217644 provision.go:87] duration metric: took 748.158412ms to configureAuth
	I1019 17:30:42.735341  217644 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:30:42.735550  217644 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:30:42.735675  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:42.755989  217644 main.go:143] libmachine: Using SSH client type: native
	I1019 17:30:42.756308  217644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1019 17:30:42.756331  217644 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:30:43.043735  217644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:30:43.043763  217644 machine.go:97] duration metric: took 4.609817331s to provisionDockerMachine
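	The CRIO_MINIKUBE_OPTIONS line echoed back above is the content of the /etc/sysconfig/crio.minikube file just written; the crio.service unit in the kicbase image is assumed to source that file and append $CRIO_MINIKUBE_OPTIONS to its ExecStart, which is why crio is restarted immediately afterwards. One way to verify the flag survived the restart:

    # The running crio command line should include --insecure-registry 10.96.0.0/12 (assumed check):
    minikube -p old-k8s-version-125363 ssh -- 'ps -o args= -C crio'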
	I1019 17:30:43.043772  217644 client.go:174] duration metric: took 12.319679775s to LocalClient.Create
	I1019 17:30:43.043800  217644 start.go:167] duration metric: took 12.319757282s to libmachine.API.Create "old-k8s-version-125363"
	I1019 17:30:43.043813  217644 start.go:293] postStartSetup for "old-k8s-version-125363" (driver="docker")
	I1019 17:30:43.043824  217644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:30:43.043897  217644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:30:43.043949  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:43.063348  217644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:30:43.167531  217644 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:30:43.171358  217644 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:30:43.171391  217644 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:30:43.171406  217644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:30:43.171465  217644 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:30:43.171548  217644 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:30:43.171652  217644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:30:43.179817  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:30:43.199313  217644 start.go:296] duration metric: took 155.484129ms for postStartSetup
	I1019 17:30:43.199723  217644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:30:43.217674  217644 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/config.json ...
	I1019 17:30:43.218001  217644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:30:43.218047  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:43.244149  217644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:30:43.343801  217644 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
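	The two df probes read single columns from the second output line: the Use% of /var for the low-disk warning, then the free space in 1 GiB blocks. For reference, the same commands outside the test harness:

    df -h /var  | awk 'NR==2{print $5}'   # percent of /var used, e.g. "12%"
    df -BG /var | awk 'NR==2{print $4}'   # gigabytes free, e.g. "170G"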
	I1019 17:30:43.349006  217644 start.go:128] duration metric: took 12.6285543s to createHost
	I1019 17:30:43.349025  217644 start.go:83] releasing machines lock for "old-k8s-version-125363", held for 12.628682073s
	I1019 17:30:43.349091  217644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:30:43.366621  217644 ssh_runner.go:195] Run: cat /version.json
	I1019 17:30:43.366673  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:43.366684  217644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:30:43.366759  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:30:43.394639  217644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:30:43.405259  217644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:30:43.502656  217644 ssh_runner.go:195] Run: systemctl --version
	I1019 17:30:43.618039  217644 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:30:43.667125  217644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:30:43.671933  217644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:30:43.672055  217644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:30:43.703087  217644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
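	Renaming the preinstalled bridge and podman CNI configs with a .mk_disabled suffix (the `find ... -exec mv` above) keeps CRI-O from wiring pods into a network before minikube installs its own CNI; the files stay on the node and can be listed or restored later:

    # Configs parked by the step above, per the log's naming convention (assumed check):
    ls -la /etc/cni/net.d/*.mk_disabled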
	I1019 17:30:43.703157  217644 start.go:496] detecting cgroup driver to use...
	I1019 17:30:43.703204  217644 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:30:43.703282  217644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:30:43.724815  217644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:30:43.739525  217644 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:30:43.739585  217644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:30:43.761242  217644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:30:43.781322  217644 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:30:43.930469  217644 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:30:44.097440  217644 docker.go:234] disabling docker service ...
	I1019 17:30:44.097512  217644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:30:44.120291  217644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:30:44.135139  217644 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:30:44.278039  217644 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:30:44.438679  217644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:30:44.456169  217644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:30:44.471416  217644 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1019 17:30:44.471474  217644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:44.480832  217644 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:30:44.480895  217644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:44.490156  217644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:44.499390  217644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:44.508760  217644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:30:44.518037  217644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:44.527613  217644 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:44.541939  217644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:30:44.551549  217644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:30:44.563532  217644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
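	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon in the "pod" cgroup, and an unprivileged-port sysctl. A grep spot-check of the expected result, with values taken from the commands in this log:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",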
	I1019 17:30:44.571969  217644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:30:44.709656  217644 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:30:44.855947  217644 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:30:44.856060  217644 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:30:44.868028  217644 start.go:564] Will wait 60s for crictl version
	I1019 17:30:44.868143  217644 ssh_runner.go:195] Run: which crictl
	I1019 17:30:44.872037  217644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:30:44.896508  217644 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:30:44.896677  217644 ssh_runner.go:195] Run: crio --version
	I1019 17:30:44.933148  217644 ssh_runner.go:195] Run: crio --version
	I1019 17:30:44.974456  217644 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1019 17:30:44.977172  217644 cli_runner.go:164] Run: docker network inspect old-k8s-version-125363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:30:44.996117  217644 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:30:45.001168  217644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:30:45.019062  217644 kubeadm.go:884] updating cluster {Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:30:45.019191  217644 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:30:45.019258  217644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:30:45.104070  217644 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:30:45.104100  217644 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:30:45.104161  217644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:30:45.143685  217644 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:30:45.143717  217644 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:30:45.143726  217644 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1019 17:30:45.143822  217644 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-125363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:30:45.143920  217644 ssh_runner.go:195] Run: crio config
	I1019 17:30:45.214816  217644 cni.go:84] Creating CNI manager for ""
	I1019 17:30:45.214848  217644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:30:45.214875  217644 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:30:45.214930  217644 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-125363 NodeName:old-k8s-version-125363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:30:45.215132  217644 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-125363"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:30:45.215218  217644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1019 17:30:45.230758  217644 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:30:45.230833  217644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:30:45.250490  217644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1019 17:30:45.278889  217644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:30:45.306052  217644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
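	Note the schema drift between the two manifests staged in this log: this v1.28.0 cluster renders kubeadm.k8s.io/v1beta3, where extraArgs is a plain string map, while the v1.34.1 bridge-953581 config earlier uses v1beta4, where extraArgs is a list of name/value pairs. kubeadm ships a converter for exactly this; a sketch using paths from this log:

    # Rewrite an older config at the newest schema the binary supports (assumed step):
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /dev/stdout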
	I1019 17:30:45.334155  217644 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:30:45.339188  217644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:30:45.359970  217644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:30:45.548254  217644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:30:45.566624  217644 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363 for IP: 192.168.85.2
	I1019 17:30:45.566698  217644 certs.go:195] generating shared ca certs ...
	I1019 17:30:45.566731  217644 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:45.566891  217644 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:30:45.566959  217644 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:30:45.566981  217644 certs.go:257] generating profile certs ...
	I1019 17:30:45.567063  217644 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.key
	I1019 17:30:45.567112  217644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.crt with IP's: []
	I1019 17:30:46.068318  217644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.crt ...
	I1019 17:30:46.068355  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.crt: {Name:mkb753c43a6bbb16d427b00e2e4167b8276c3e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:46.068811  217644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.key ...
	I1019 17:30:46.068835  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.key: {Name:mk998db59f65426b58734d47b257e76a0cd0164d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:46.068961  217644 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key.02194795
	I1019 17:30:46.068987  217644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt.02194795 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:30:46.794950  217644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt.02194795 ...
	I1019 17:30:46.794982  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt.02194795: {Name:mkd92021e34c61f19fa3a8058b3900b46c9d0c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:46.795147  217644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key.02194795 ...
	I1019 17:30:46.795163  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key.02194795: {Name:mk63723f9c767a8ee6a2f7834f1ed4ab7cca0502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:46.795237  217644 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt.02194795 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt
	I1019 17:30:46.795316  217644 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key.02194795 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key
	I1019 17:30:46.795378  217644 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key
	I1019 17:30:46.795397  217644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.crt with IP's: []
	I1019 17:30:48.623211  217644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.crt ...
	I1019 17:30:48.623288  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.crt: {Name:mk5e369d9848877872a951f6b3bdeb6f829d85cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:48.623539  217644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key ...
	I1019 17:30:48.623574  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key: {Name:mk5aa2f4b0e581c5c9d47fc584c91cebc449978b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:30:48.623829  217644 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:30:48.623893  217644 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:30:48.623918  217644 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:30:48.623981  217644 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:30:48.624029  217644 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:30:48.624068  217644 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:30:48.624139  217644 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:30:48.624811  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:30:48.651677  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:30:48.676035  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:30:48.701808  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:30:48.729633  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1019 17:30:48.759820  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:30:48.791592  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:30:48.832996  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:30:48.853739  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:30:48.873407  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:30:48.894147  217644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:30:48.913927  217644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:30:48.928706  217644 ssh_runner.go:195] Run: openssl version
	I1019 17:30:48.935451  217644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:30:48.944883  217644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:30:48.950912  217644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:30:48.950993  217644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:30:49.004080  217644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:30:49.013802  217644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:30:49.023224  217644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:30:49.027764  217644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:30:49.027891  217644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:30:49.082702  217644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:30:49.095748  217644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:30:49.109529  217644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:30:49.114027  217644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:30:49.114138  217644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:30:49.168736  217644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
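
The three `openssl x509 -hash` / `ln -fs` pairs above are the standard OpenSSL CA-directory scheme: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how TLS clients locate a CA by hash. A minimal Go sketch of the same sequence, shelling out to openssl the way ssh_runner does here (the helper name installCA and the local paths are illustrative; minikube runs these commands over SSH inside the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a CA certificate into /etc/ssl/certs under its
// OpenSSL subject hash, mirroring the `openssl x509 -hash -noout`
// plus `ln -fs` pair shown in the log above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}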
	I1019 17:30:49.178164  217644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:30:49.182743  217644 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:30:49.182860  217644 kubeadm.go:401] StartCluster: {Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:30:49.182976  217644 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:30:49.183085  217644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:30:49.221051  217644 cri.go:89] found id: ""
	I1019 17:30:49.221205  217644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:30:49.237736  217644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:30:49.248940  217644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:30:49.249054  217644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:30:49.261552  217644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:30:49.261619  217644 kubeadm.go:158] found existing configuration files:
	
	I1019 17:30:49.261698  217644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:30:49.272287  217644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:30:49.272368  217644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:30:49.284374  217644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:30:49.319039  217644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:30:49.319108  217644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:30:49.337179  217644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:30:49.359841  217644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:30:49.359908  217644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:30:49.387047  217644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:30:49.410642  217644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:30:49.410716  217644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
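
The block above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here every grep exits with status 2 because the files don't exist yet on a first start, so the `rm -f` calls are no-ops). A rough Go equivalent of that loop, with the endpoint and file list taken from the log (a sketch, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or a missing endpoint both lead to removal,
		// matching the grep-then-rm sequence in the log.
		if err != nil || !strings.Contains(string(data), endpoint) {
			if os.Remove(f) == nil { // rm -f semantics: ignore missing files
				fmt.Println("removed stale", f)
			}
		}
	}
}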
	I1019 17:30:49.423804  217644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:30:49.506122  217644 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1019 17:30:49.506474  217644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:30:49.582960  217644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:30:49.583039  217644 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:30:49.583081  217644 kubeadm.go:319] OS: Linux
	I1019 17:30:49.583138  217644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:30:49.583193  217644 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:30:49.583248  217644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:30:49.583303  217644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:30:49.583357  217644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:30:49.583410  217644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:30:49.583463  217644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:30:49.583517  217644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:30:49.583569  217644 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:30:49.742625  217644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:30:49.742750  217644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:30:49.742853  217644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:30:50.110984  217644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:30:50.117022  217644 out.go:252]   - Generating certificates and keys ...
	I1019 17:30:50.117125  217644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:30:50.117202  217644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:30:50.396436  217644 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:30:50.626874  217644 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:30:51.151516  217644 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:30:51.344860  217644 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:30:51.912530  217644 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:30:51.913228  217644 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-125363] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:30:52.361523  217644 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:30:52.362106  217644 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-125363] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:30:52.833590  217644 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:30:53.202564  217644 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:30:54.426543  217644 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:30:54.427118  217644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:30:55.312465  217644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:30:56.425268  217644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:30:56.958399  217644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:30:57.410172  217644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:30:57.413542  217644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:30:57.416891  217644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:30:59.274325  215318 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:30:59.274386  215318 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:30:59.274486  215318 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:30:59.274573  215318 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:30:59.274614  215318 kubeadm.go:319] OS: Linux
	I1019 17:30:59.274666  215318 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:30:59.274721  215318 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:30:59.274773  215318 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:30:59.274828  215318 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:30:59.274882  215318 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:30:59.274935  215318 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:30:59.274985  215318 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:30:59.275039  215318 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:30:59.275089  215318 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:30:59.275168  215318 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:30:59.275270  215318 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:30:59.275368  215318 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:30:59.275435  215318 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:30:59.282967  215318 out.go:252]   - Generating certificates and keys ...
	I1019 17:30:59.283082  215318 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:30:59.283165  215318 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:30:59.283244  215318 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:30:59.283307  215318 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:30:59.283387  215318 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:30:59.283468  215318 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:30:59.283529  215318 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:30:59.283663  215318 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-953581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:30:59.283722  215318 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:30:59.283847  215318 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-953581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:30:59.283920  215318 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:30:59.283990  215318 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:30:59.284045  215318 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:30:59.284107  215318 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:30:59.284164  215318 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:30:59.284228  215318 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:30:59.284288  215318 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:30:59.284358  215318 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:30:59.284418  215318 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:30:59.284506  215318 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:30:59.284579  215318 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:30:59.287455  215318 out.go:252]   - Booting up control plane ...
	I1019 17:30:59.287617  215318 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:30:59.287712  215318 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:30:59.287797  215318 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:30:59.287916  215318 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:30:59.288023  215318 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:30:59.288141  215318 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:30:59.288238  215318 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:30:59.288283  215318 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:30:59.288430  215318 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:30:59.288555  215318 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:30:59.288625  215318 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001521625s
	I1019 17:30:59.288731  215318 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:30:59.288823  215318 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1019 17:30:59.288926  215318 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:30:59.289016  215318 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:30:59.289103  215318 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.185997314s
	I1019 17:30:59.289181  215318 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.45297399s
	I1019 17:30:59.289260  215318 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.503987092s
	I1019 17:30:59.289381  215318 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:30:59.289523  215318 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:30:59.289591  215318 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:30:59.289808  215318 kubeadm.go:319] [mark-control-plane] Marking the node bridge-953581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:30:59.289876  215318 kubeadm.go:319] [bootstrap-token] Using token: 2cmamm.rsa6p0e6uehntdza
	I1019 17:30:59.293704  215318 out.go:252]   - Configuring RBAC rules ...
	I1019 17:30:59.293844  215318 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:30:59.293945  215318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:30:59.294135  215318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:30:59.294283  215318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:30:59.294415  215318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:30:59.294514  215318 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:30:59.294758  215318 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:30:59.294811  215318 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:30:59.294865  215318 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:30:59.294870  215318 kubeadm.go:319] 
	I1019 17:30:59.294940  215318 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:30:59.294946  215318 kubeadm.go:319] 
	I1019 17:30:59.295031  215318 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:30:59.295036  215318 kubeadm.go:319] 
	I1019 17:30:59.295064  215318 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:30:59.295129  215318 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:30:59.295185  215318 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:30:59.295190  215318 kubeadm.go:319] 
	I1019 17:30:59.295253  215318 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:30:59.295258  215318 kubeadm.go:319] 
	I1019 17:30:59.295310  215318 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:30:59.295315  215318 kubeadm.go:319] 
	I1019 17:30:59.295373  215318 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:30:59.295456  215318 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:30:59.295545  215318 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:30:59.295550  215318 kubeadm.go:319] 
	I1019 17:30:59.295646  215318 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:30:59.295732  215318 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:30:59.295736  215318 kubeadm.go:319] 
	I1019 17:30:59.295829  215318 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2cmamm.rsa6p0e6uehntdza \
	I1019 17:30:59.295944  215318 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 17:30:59.295966  215318 kubeadm.go:319] 	--control-plane 
	I1019 17:30:59.295971  215318 kubeadm.go:319] 
	I1019 17:30:59.296066  215318 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:30:59.296070  215318 kubeadm.go:319] 
	I1019 17:30:59.296162  215318 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2cmamm.rsa6p0e6uehntdza \
	I1019 17:30:59.296289  215318 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
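
The `--discovery-token-ca-cert-hash sha256:e46e32...` value printed in both join commands is kubeadm's CA public-key pin: the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from the CA certificate; a minimal Go sketch (the cert path is taken from this log's node layout):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Recompute kubeadm's discovery-token-ca-cert-hash from ca.crt.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}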
	I1019 17:30:59.296297  215318 cni.go:84] Creating CNI manager for "bridge"
	I1019 17:30:59.300167  215318 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1019 17:30:57.420254  217644 out.go:252]   - Booting up control plane ...
	I1019 17:30:57.420416  217644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:30:57.421234  217644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:30:57.422477  217644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:30:57.442731  217644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:30:57.443742  217644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:30:57.443982  217644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:30:57.622472  217644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1019 17:30:59.303320  215318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1019 17:30:59.312368  215318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
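
The 496-byte conflist itself is not captured in the log; for orientation, a bridge CNI config of the general shape minikube writes to /etc/cni/net.d/1-k8s.conflist looks roughly like the following (field values are illustrative, not the literal file):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}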
	I1019 17:30:59.344023  215318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:30:59.344151  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:30:59.344224  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-953581 minikube.k8s.io/updated_at=2025_10_19T17_30_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=bridge-953581 minikube.k8s.io/primary=true
	I1019 17:30:59.664855  215318 ops.go:34] apiserver oom_adj: -16
	I1019 17:30:59.664985  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:00.165434  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:00.665902  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:01.165807  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:01.666016  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:02.165480  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:02.665069  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:03.165425  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:03.665671  215318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:03.786654  215318 kubeadm.go:1114] duration metric: took 4.442545447s to wait for elevateKubeSystemPrivileges
	I1019 17:31:03.786684  215318 kubeadm.go:403] duration metric: took 23.731183245s to StartCluster
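
The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, are minikube polling for the "default" ServiceAccount to exist before it grants kube-system the cluster-admin binding (elevateKubeSystemPrivileges); here the poll converged after about 4.4s. A minimal sketch of that poll, assuming kubectl on PATH and a default kubeconfig (not minikube's exact invocation, which pins a binary path and kubeconfig):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once kube-controller-manager has created
		// the "default" ServiceAccount in the default namespace.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}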
	I1019 17:31:03.786701  215318 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:31:03.786762  215318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:31:03.787446  215318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:31:03.787662  215318 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:31:03.787747  215318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:31:03.787987  215318 config.go:182] Loaded profile config "bridge-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:31:03.788003  215318 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:31:03.788107  215318 addons.go:70] Setting storage-provisioner=true in profile "bridge-953581"
	I1019 17:31:03.788127  215318 addons.go:239] Setting addon storage-provisioner=true in "bridge-953581"
	I1019 17:31:03.788140  215318 addons.go:70] Setting default-storageclass=true in profile "bridge-953581"
	I1019 17:31:03.788151  215318 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-953581"
	I1019 17:31:03.788482  215318 cli_runner.go:164] Run: docker container inspect bridge-953581 --format={{.State.Status}}
	I1019 17:31:03.788656  215318 host.go:66] Checking if "bridge-953581" exists ...
	I1019 17:31:03.789151  215318 cli_runner.go:164] Run: docker container inspect bridge-953581 --format={{.State.Status}}
	I1019 17:31:03.795225  215318 out.go:179] * Verifying Kubernetes components...
	I1019 17:31:03.801491  215318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:31:03.839093  215318 addons.go:239] Setting addon default-storageclass=true in "bridge-953581"
	I1019 17:31:03.839133  215318 host.go:66] Checking if "bridge-953581" exists ...
	I1019 17:31:03.839557  215318 cli_runner.go:164] Run: docker container inspect bridge-953581 --format={{.State.Status}}
	I1019 17:31:03.843437  215318 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:31:03.846330  215318 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:31:03.846365  215318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:31:03.846431  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:31:03.869898  215318 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:31:03.869920  215318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:31:03.869983  215318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-953581
	I1019 17:31:03.895818  215318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa Username:docker}
	I1019 17:31:03.914352  215318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/bridge-953581/id_rsa Username:docker}
	I1019 17:31:04.360933  215318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:31:04.368852  215318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:31:04.518842  215318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:31:04.518955  215318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:31:05.693940  215318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.332976202s)
	I1019 17:31:05.694064  215318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.325192311s)
	I1019 17:31:05.694489  215318 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.175517322s)
	I1019 17:31:05.695294  215318 node_ready.go:35] waiting up to 15m0s for node "bridge-953581" to be "Ready" ...
	I1019 17:31:05.695636  215318 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.176765311s)
	I1019 17:31:05.695686  215318 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
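
The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block resolving host.minikube.internal to the container gateway (192.168.76.1) just before the forward directive, and a log directive just before errors, then replaces the ConfigMap. After the replace, the relevant part of the Corefile reads roughly as follows (abridged; other default directives omitted):

.:53 {
    log
    errors
    ...
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}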
	I1019 17:31:05.741153  215318 node_ready.go:49] node "bridge-953581" is "Ready"
	I1019 17:31:05.741181  215318 node_ready.go:38] duration metric: took 45.83255ms for node "bridge-953581" to be "Ready" ...
	I1019 17:31:05.741193  215318 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:31:05.741250  215318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:31:05.761172  215318 api_server.go:72] duration metric: took 1.973474722s to wait for apiserver process to appear ...
	I1019 17:31:05.761194  215318 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:31:05.761212  215318 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:31:05.786911  215318 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:31:05.793321  215318 api_server.go:141] control plane version: v1.34.1
	I1019 17:31:05.793389  215318 api_server.go:131] duration metric: took 32.188183ms to wait for apiserver health ...
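
The healthz wait above is a plain HTTPS GET against the apiserver's /healthz, succeeding on a 200 with body "ok". An equivalent probe in Go, skipping certificate verification the way a bootstrap-time check must (a sketch; the address comes from this log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not in the local trust store at
		// bootstrap time, so skip verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}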
	I1019 17:31:05.793411  215318 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:31:05.796257  215318 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:31:05.799270  215318 addons.go:515] duration metric: took 2.011247815s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:31:05.806617  215318 system_pods.go:59] 8 kube-system pods found
	I1019 17:31:05.806654  215318 system_pods.go:61] "coredns-66bc5c9577-np85p" [41a9c44d-ad59-4ea4-8c46-e825953ee8ce] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:05.806662  215318 system_pods.go:61] "coredns-66bc5c9577-p7rz6" [bd0690a7-5670-4306-9cfc-cf3b90ff786a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:05.806672  215318 system_pods.go:61] "etcd-bridge-953581" [d0cb8162-07bc-4314-ba72-2e9d447bf722] Running
	I1019 17:31:05.806677  215318 system_pods.go:61] "kube-apiserver-bridge-953581" [f054c068-55c1-4c72-94bd-8f9a7b72c7bf] Running
	I1019 17:31:05.806683  215318 system_pods.go:61] "kube-controller-manager-bridge-953581" [368cb1be-b318-4293-af66-b31d4e63828b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:31:05.806689  215318 system_pods.go:61] "kube-proxy-h62dk" [f2e00b66-cb86-4b79-aadd-11dae288cce4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:31:05.806694  215318 system_pods.go:61] "kube-scheduler-bridge-953581" [e374111b-149b-4376-a340-811fbe5ef6d4] Running
	I1019 17:31:05.806698  215318 system_pods.go:61] "storage-provisioner" [be83ba02-7077-4e05-b9c7-deba71fe3fb4] Pending
	I1019 17:31:05.806702  215318 system_pods.go:74] duration metric: took 13.258555ms to wait for pod list to return data ...
	I1019 17:31:05.806710  215318 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:31:05.830046  215318 default_sa.go:45] found service account: "default"
	I1019 17:31:05.830071  215318 default_sa.go:55] duration metric: took 23.35492ms for default service account to be created ...
	I1019 17:31:05.830081  215318 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:31:05.850237  215318 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:05.850375  215318 system_pods.go:89] "coredns-66bc5c9577-np85p" [41a9c44d-ad59-4ea4-8c46-e825953ee8ce] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:05.850442  215318 system_pods.go:89] "coredns-66bc5c9577-p7rz6" [bd0690a7-5670-4306-9cfc-cf3b90ff786a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:05.850477  215318 system_pods.go:89] "etcd-bridge-953581" [d0cb8162-07bc-4314-ba72-2e9d447bf722] Running
	I1019 17:31:05.850507  215318 system_pods.go:89] "kube-apiserver-bridge-953581" [f054c068-55c1-4c72-94bd-8f9a7b72c7bf] Running
	I1019 17:31:05.850573  215318 system_pods.go:89] "kube-controller-manager-bridge-953581" [368cb1be-b318-4293-af66-b31d4e63828b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:31:05.850628  215318 system_pods.go:89] "kube-proxy-h62dk" [f2e00b66-cb86-4b79-aadd-11dae288cce4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:31:05.850662  215318 system_pods.go:89] "kube-scheduler-bridge-953581" [e374111b-149b-4376-a340-811fbe5ef6d4] Running
	I1019 17:31:05.850684  215318 system_pods.go:89] "storage-provisioner" [be83ba02-7077-4e05-b9c7-deba71fe3fb4] Pending
	I1019 17:31:05.850748  215318 retry.go:31] will retry after 201.466861ms: missing components: kube-dns, kube-proxy
	I1019 17:31:06.083185  215318 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:06.083327  215318 system_pods.go:89] "coredns-66bc5c9577-np85p" [41a9c44d-ad59-4ea4-8c46-e825953ee8ce] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:06.083353  215318 system_pods.go:89] "coredns-66bc5c9577-p7rz6" [bd0690a7-5670-4306-9cfc-cf3b90ff786a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:06.083396  215318 system_pods.go:89] "etcd-bridge-953581" [d0cb8162-07bc-4314-ba72-2e9d447bf722] Running
	I1019 17:31:06.083422  215318 system_pods.go:89] "kube-apiserver-bridge-953581" [f054c068-55c1-4c72-94bd-8f9a7b72c7bf] Running
	I1019 17:31:06.083446  215318 system_pods.go:89] "kube-controller-manager-bridge-953581" [368cb1be-b318-4293-af66-b31d4e63828b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:31:06.083486  215318 system_pods.go:89] "kube-proxy-h62dk" [f2e00b66-cb86-4b79-aadd-11dae288cce4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:31:06.083514  215318 system_pods.go:89] "kube-scheduler-bridge-953581" [e374111b-149b-4376-a340-811fbe5ef6d4] Running
	I1019 17:31:06.083536  215318 system_pods.go:89] "storage-provisioner" [be83ba02-7077-4e05-b9c7-deba71fe3fb4] Pending
	I1019 17:31:06.083584  215318 retry.go:31] will retry after 248.726158ms: missing components: kube-dns, kube-proxy
	I1019 17:31:06.199989  215318 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-953581" context rescaled to 1 replicas
	I1019 17:31:06.339406  215318 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:06.339488  215318 system_pods.go:89] "coredns-66bc5c9577-np85p" [41a9c44d-ad59-4ea4-8c46-e825953ee8ce] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:06.339511  215318 system_pods.go:89] "coredns-66bc5c9577-p7rz6" [bd0690a7-5670-4306-9cfc-cf3b90ff786a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:06.339532  215318 system_pods.go:89] "etcd-bridge-953581" [d0cb8162-07bc-4314-ba72-2e9d447bf722] Running
	I1019 17:31:06.339569  215318 system_pods.go:89] "kube-apiserver-bridge-953581" [f054c068-55c1-4c72-94bd-8f9a7b72c7bf] Running
	I1019 17:31:06.339589  215318 system_pods.go:89] "kube-controller-manager-bridge-953581" [368cb1be-b318-4293-af66-b31d4e63828b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:31:06.339607  215318 system_pods.go:89] "kube-proxy-h62dk" [f2e00b66-cb86-4b79-aadd-11dae288cce4] Running
	I1019 17:31:06.339627  215318 system_pods.go:89] "kube-scheduler-bridge-953581" [e374111b-149b-4376-a340-811fbe5ef6d4] Running
	I1019 17:31:06.339654  215318 system_pods.go:89] "storage-provisioner" [be83ba02-7077-4e05-b9c7-deba71fe3fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:31:06.339686  215318 retry.go:31] will retry after 320.035053ms: missing components: kube-dns
	I1019 17:31:06.626385  217644 kubeadm.go:319] [apiclient] All control plane components are healthy after 9.003795 seconds
	I1019 17:31:06.626568  217644 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:31:06.647246  217644 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:31:07.192806  217644 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:31:07.193018  217644 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-125363 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:31:07.711059  217644 kubeadm.go:319] [bootstrap-token] Using token: 0b4ahd.ypot9znt4t2679qk
	I1019 17:31:07.713998  217644 out.go:252]   - Configuring RBAC rules ...
	I1019 17:31:07.714146  217644 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:31:07.722286  217644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:31:07.731576  217644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:31:07.735966  217644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:31:07.741750  217644 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:31:07.746266  217644 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:31:07.761987  217644 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:31:08.076105  217644 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:31:08.161630  217644 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:31:08.163285  217644 kubeadm.go:319] 
	I1019 17:31:08.163355  217644 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:31:08.163361  217644 kubeadm.go:319] 
	I1019 17:31:08.163438  217644 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:31:08.163443  217644 kubeadm.go:319] 
	I1019 17:31:08.163469  217644 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:31:08.163648  217644 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:31:08.163711  217644 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:31:08.163716  217644 kubeadm.go:319] 
	I1019 17:31:08.163772  217644 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:31:08.163776  217644 kubeadm.go:319] 
	I1019 17:31:08.163826  217644 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:31:08.163830  217644 kubeadm.go:319] 
	I1019 17:31:08.163884  217644 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:31:08.163962  217644 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:31:08.164034  217644 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:31:08.164038  217644 kubeadm.go:319] 
	I1019 17:31:08.164127  217644 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:31:08.164208  217644 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:31:08.164212  217644 kubeadm.go:319] 
	I1019 17:31:08.164299  217644 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0b4ahd.ypot9znt4t2679qk \
	I1019 17:31:08.164406  217644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 17:31:08.164562  217644 kubeadm.go:319] 	--control-plane 
	I1019 17:31:08.164572  217644 kubeadm.go:319] 
	I1019 17:31:08.164656  217644 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:31:08.164661  217644 kubeadm.go:319] 
	I1019 17:31:08.164742  217644 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0b4ahd.ypot9znt4t2679qk \
	I1019 17:31:08.164844  217644 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
	I1019 17:31:08.170292  217644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 17:31:08.170411  217644 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 17:31:08.170429  217644 cni.go:84] Creating CNI manager for ""
	I1019 17:31:08.170437  217644 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:31:08.173903  217644 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:31:08.176690  217644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:31:08.181636  217644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1019 17:31:08.181702  217644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:31:08.222741  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:31:09.214418  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:09.214562  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-125363 minikube.k8s.io/updated_at=2025_10_19T17_31_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=old-k8s-version-125363 minikube.k8s.io/primary=true
	I1019 17:31:09.214637  217644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:31:09.349563  217644 ops.go:34] apiserver oom_adj: -16
	I1019 17:31:09.349672  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:09.849907  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:10.349951  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:06.664330  215318 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:06.664378  215318 system_pods.go:89] "coredns-66bc5c9577-np85p" [41a9c44d-ad59-4ea4-8c46-e825953ee8ce] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:06.664401  215318 system_pods.go:89] "coredns-66bc5c9577-p7rz6" [bd0690a7-5670-4306-9cfc-cf3b90ff786a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:06.664413  215318 system_pods.go:89] "etcd-bridge-953581" [d0cb8162-07bc-4314-ba72-2e9d447bf722] Running
	I1019 17:31:06.664427  215318 system_pods.go:89] "kube-apiserver-bridge-953581" [f054c068-55c1-4c72-94bd-8f9a7b72c7bf] Running
	I1019 17:31:06.664446  215318 system_pods.go:89] "kube-controller-manager-bridge-953581" [368cb1be-b318-4293-af66-b31d4e63828b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:31:06.664478  215318 system_pods.go:89] "kube-proxy-h62dk" [f2e00b66-cb86-4b79-aadd-11dae288cce4] Running
	I1019 17:31:06.664489  215318 system_pods.go:89] "kube-scheduler-bridge-953581" [e374111b-149b-4376-a340-811fbe5ef6d4] Running
	I1019 17:31:06.664498  215318 system_pods.go:89] "storage-provisioner" [be83ba02-7077-4e05-b9c7-deba71fe3fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:31:06.664518  215318 retry.go:31] will retry after 494.154232ms: missing components: kube-dns
	I1019 17:31:07.162714  215318 system_pods.go:86] 7 kube-system pods found
	I1019 17:31:07.162753  215318 system_pods.go:89] "coredns-66bc5c9577-p7rz6" [bd0690a7-5670-4306-9cfc-cf3b90ff786a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:07.162760  215318 system_pods.go:89] "etcd-bridge-953581" [d0cb8162-07bc-4314-ba72-2e9d447bf722] Running
	I1019 17:31:07.162765  215318 system_pods.go:89] "kube-apiserver-bridge-953581" [f054c068-55c1-4c72-94bd-8f9a7b72c7bf] Running
	I1019 17:31:07.162773  215318 system_pods.go:89] "kube-controller-manager-bridge-953581" [368cb1be-b318-4293-af66-b31d4e63828b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:31:07.162777  215318 system_pods.go:89] "kube-proxy-h62dk" [f2e00b66-cb86-4b79-aadd-11dae288cce4] Running
	I1019 17:31:07.162782  215318 system_pods.go:89] "kube-scheduler-bridge-953581" [e374111b-149b-4376-a340-811fbe5ef6d4] Running
	I1019 17:31:07.162786  215318 system_pods.go:89] "storage-provisioner" [be83ba02-7077-4e05-b9c7-deba71fe3fb4] Running
	I1019 17:31:07.162794  215318 system_pods.go:126] duration metric: took 1.332705832s to wait for k8s-apps to be running ...
	I1019 17:31:07.162806  215318 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:31:07.162863  215318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:31:07.180837  215318 system_svc.go:56] duration metric: took 18.004541ms WaitForService to wait for kubelet
	I1019 17:31:07.180912  215318 kubeadm.go:587] duration metric: took 3.393219356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:31:07.180947  215318 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:31:07.184894  215318 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:31:07.184971  215318 node_conditions.go:123] node cpu capacity is 2
	I1019 17:31:07.185000  215318 node_conditions.go:105] duration metric: took 4.032924ms to run NodePressure ...
	I1019 17:31:07.185024  215318 start.go:242] waiting for startup goroutines ...
	I1019 17:31:07.185057  215318 start.go:247] waiting for cluster config update ...
	I1019 17:31:07.185087  215318 start.go:256] writing updated cluster config ...
	I1019 17:31:07.185446  215318 ssh_runner.go:195] Run: rm -f paused
	I1019 17:31:07.189579  215318 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:31:07.195727  215318 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7rz6" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:31:09.202127  215318 pod_ready.go:104] pod "coredns-66bc5c9577-p7rz6" is not "Ready", error: <nil>
	W1019 17:31:11.204394  215318 pod_ready.go:104] pod "coredns-66bc5c9577-p7rz6" is not "Ready", error: <nil>
	I1019 17:31:10.850466  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:11.349947  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:11.850516  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:12.349798  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:12.850605  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:13.350703  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:13.850364  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:14.349821  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:14.850768  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:15.350048  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1019 17:31:13.701073  215318 pod_ready.go:104] pod "coredns-66bc5c9577-p7rz6" is not "Ready", error: <nil>
	I1019 17:31:14.701943  215318 pod_ready.go:94] pod "coredns-66bc5c9577-p7rz6" is "Ready"
	I1019 17:31:14.701970  215318 pod_ready.go:86] duration metric: took 7.506214476s for pod "coredns-66bc5c9577-p7rz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:14.704552  215318 pod_ready.go:83] waiting for pod "etcd-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:14.709218  215318 pod_ready.go:94] pod "etcd-bridge-953581" is "Ready"
	I1019 17:31:14.709247  215318 pod_ready.go:86] duration metric: took 4.670547ms for pod "etcd-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:14.711489  215318 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:14.716016  215318 pod_ready.go:94] pod "kube-apiserver-bridge-953581" is "Ready"
	I1019 17:31:14.716042  215318 pod_ready.go:86] duration metric: took 4.52821ms for pod "kube-apiserver-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:14.718343  215318 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:14.900382  215318 pod_ready.go:94] pod "kube-controller-manager-bridge-953581" is "Ready"
	I1019 17:31:14.900456  215318 pod_ready.go:86] duration metric: took 182.085155ms for pod "kube-controller-manager-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:15.100518  215318 pod_ready.go:83] waiting for pod "kube-proxy-h62dk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:15.499822  215318 pod_ready.go:94] pod "kube-proxy-h62dk" is "Ready"
	I1019 17:31:15.499847  215318 pod_ready.go:86] duration metric: took 399.29574ms for pod "kube-proxy-h62dk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:15.700376  215318 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:16.101061  215318 pod_ready.go:94] pod "kube-scheduler-bridge-953581" is "Ready"
	I1019 17:31:16.101088  215318 pod_ready.go:86] duration metric: took 400.686499ms for pod "kube-scheduler-bridge-953581" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:16.101102  215318 pod_ready.go:40] duration metric: took 8.911450388s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:31:16.166666  215318 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:31:16.170205  215318 out.go:179] * Done! kubectl is now configured to use "bridge-953581" cluster and "default" namespace by default
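
The pod_ready lines above are minikube's extra readiness gate: every kube-system pod carrying one of the listed control-plane labels must report Ready before the profile is declared done. A rough manual equivalent, as a sketch using plain kubectl rather than minikube's internal poller (the selector covers only the coredns case here; the 4m timeout is taken from the log):

	kubectl --context bridge-953581 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
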
	I1019 17:31:15.850192  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:16.350691  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:16.849767  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:17.350416  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:17.850392  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:18.349843  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:18.849775  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:19.350484  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:19.849754  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:20.350462  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:20.850624  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:21.350415  217644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:31:21.498580  217644 kubeadm.go:1114] duration metric: took 12.284212833s to wait for elevateKubeSystemPrivileges
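
The burst of "kubectl get sa default" runs above is the elevateKubeSystemPrivileges wait: kubeadm creates the default ServiceAccount asynchronously, so minikube polls roughly twice a second until it exists. A minimal shell sketch of the same check (assuming kubectl already points at this cluster):

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done
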
	I1019 17:31:21.498636  217644 kubeadm.go:403] duration metric: took 32.315775333s to StartCluster
	I1019 17:31:21.498663  217644 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:31:21.498754  217644 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:31:21.499940  217644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:31:21.500227  217644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:31:21.500241  217644 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:31:21.500508  217644 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:31:21.500540  217644 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:31:21.500601  217644 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-125363"
	I1019 17:31:21.500620  217644 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-125363"
	I1019 17:31:21.500644  217644 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:31:21.501127  217644 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:31:21.501649  217644 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-125363"
	I1019 17:31:21.501694  217644 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-125363"
	I1019 17:31:21.502098  217644 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:31:21.505021  217644 out.go:179] * Verifying Kubernetes components...
	I1019 17:31:21.513503  217644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:31:21.541464  217644 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-125363"
	I1019 17:31:21.541503  217644 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:31:21.541903  217644 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:31:21.551543  217644 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:31:21.554946  217644 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:31:21.554968  217644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:31:21.555040  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:31:21.584069  217644 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:31:21.584090  217644 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:31:21.584164  217644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:31:21.605619  217644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:31:21.616638  217644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:31:21.879058  217644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:31:21.879268  217644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:31:21.970750  217644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:31:21.990588  217644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:31:22.679792  217644 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
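
The "replace -f -" pipeline above rewrites the coredns ConfigMap in place; the injected Corefile fragment (copied from the sed expression in the log) is what makes host.minikube.internal resolvable from pods:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}
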
	I1019 17:31:22.680614  217644 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-125363" to be "Ready" ...
	I1019 17:31:23.185738  217644 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-125363" context rescaled to 1 replicas
	I1019 17:31:23.213817  217644 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.223142392s)
	I1019 17:31:23.217102  217644 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1019 17:31:23.220149  217644 addons.go:515] duration metric: took 1.719579837s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1019 17:31:24.684663  217644 node_ready.go:57] node "old-k8s-version-125363" has "Ready":"False" status (will retry)
	W1019 17:31:27.184325  217644 node_ready.go:57] node "old-k8s-version-125363" has "Ready":"False" status (will retry)
	W1019 17:31:29.683421  217644 node_ready.go:57] node "old-k8s-version-125363" has "Ready":"False" status (will retry)
	W1019 17:31:31.684051  217644 node_ready.go:57] node "old-k8s-version-125363" has "Ready":"False" status (will retry)
	W1019 17:31:34.185733  217644 node_ready.go:57] node "old-k8s-version-125363" has "Ready":"False" status (will retry)
	I1019 17:31:35.684031  217644 node_ready.go:49] node "old-k8s-version-125363" is "Ready"
	I1019 17:31:35.684074  217644 node_ready.go:38] duration metric: took 13.003417313s for node "old-k8s-version-125363" to be "Ready" ...
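
The node wait above (about 13s from the watch starting to the Ready condition) can be reproduced by hand; a sketch with plain kubectl, using the same 6m budget the log allows:

	kubectl wait node/old-k8s-version-125363 --for=condition=Ready --timeout=6m
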
	I1019 17:31:35.684088  217644 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:31:35.684179  217644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:31:35.701832  217644 api_server.go:72] duration metric: took 14.201554908s to wait for apiserver process to appear ...
	I1019 17:31:35.701853  217644 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:31:35.701871  217644 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:31:35.712646  217644 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:31:35.714245  217644 api_server.go:141] control plane version: v1.28.0
	I1019 17:31:35.714269  217644 api_server.go:131] duration metric: took 12.409516ms to wait for apiserver health ...
	I1019 17:31:35.714278  217644 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:31:35.719191  217644 system_pods.go:59] 8 kube-system pods found
	I1019 17:31:35.719234  217644 system_pods.go:61] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:35.719243  217644 system_pods.go:61] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running
	I1019 17:31:35.719249  217644 system_pods.go:61] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:31:35.719254  217644 system_pods.go:61] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running
	I1019 17:31:35.719260  217644 system_pods.go:61] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running
	I1019 17:31:35.719264  217644 system_pods.go:61] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:31:35.719269  217644 system_pods.go:61] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running
	I1019 17:31:35.719280  217644 system_pods.go:61] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:31:35.719286  217644 system_pods.go:74] duration metric: took 5.002678ms to wait for pod list to return data ...
	I1019 17:31:35.719299  217644 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:31:35.722194  217644 default_sa.go:45] found service account: "default"
	I1019 17:31:35.722222  217644 default_sa.go:55] duration metric: took 2.917096ms for default service account to be created ...
	I1019 17:31:35.722242  217644 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:31:35.726085  217644 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:35.726115  217644 system_pods.go:89] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:35.726122  217644 system_pods.go:89] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running
	I1019 17:31:35.726130  217644 system_pods.go:89] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:31:35.726134  217644 system_pods.go:89] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running
	I1019 17:31:35.726139  217644 system_pods.go:89] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running
	I1019 17:31:35.726143  217644 system_pods.go:89] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:31:35.726152  217644 system_pods.go:89] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running
	I1019 17:31:35.726158  217644 system_pods.go:89] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:31:35.726191  217644 retry.go:31] will retry after 189.062427ms: missing components: kube-dns
	I1019 17:31:35.921495  217644 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:35.921531  217644 system_pods.go:89] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:35.921539  217644 system_pods.go:89] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running
	I1019 17:31:35.921544  217644 system_pods.go:89] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:31:35.921549  217644 system_pods.go:89] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running
	I1019 17:31:35.921554  217644 system_pods.go:89] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running
	I1019 17:31:35.921559  217644 system_pods.go:89] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:31:35.921563  217644 system_pods.go:89] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running
	I1019 17:31:35.921568  217644 system_pods.go:89] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:31:35.921584  217644 retry.go:31] will retry after 317.249664ms: missing components: kube-dns
	I1019 17:31:36.242869  217644 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:36.242907  217644 system_pods.go:89] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:31:36.242914  217644 system_pods.go:89] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running
	I1019 17:31:36.242922  217644 system_pods.go:89] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:31:36.242926  217644 system_pods.go:89] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running
	I1019 17:31:36.242937  217644 system_pods.go:89] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running
	I1019 17:31:36.242942  217644 system_pods.go:89] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:31:36.242947  217644 system_pods.go:89] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running
	I1019 17:31:36.242954  217644 system_pods.go:89] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:31:36.242972  217644 retry.go:31] will retry after 466.744763ms: missing components: kube-dns
	I1019 17:31:36.715244  217644 system_pods.go:86] 8 kube-system pods found
	I1019 17:31:36.715271  217644 system_pods.go:89] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Running
	I1019 17:31:36.715278  217644 system_pods.go:89] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running
	I1019 17:31:36.715282  217644 system_pods.go:89] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:31:36.715286  217644 system_pods.go:89] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running
	I1019 17:31:36.715292  217644 system_pods.go:89] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running
	I1019 17:31:36.715296  217644 system_pods.go:89] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:31:36.715301  217644 system_pods.go:89] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running
	I1019 17:31:36.715305  217644 system_pods.go:89] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Running
	I1019 17:31:36.715313  217644 system_pods.go:126] duration metric: took 993.064984ms to wait for k8s-apps to be running ...
	I1019 17:31:36.715322  217644 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:31:36.715450  217644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:31:36.729784  217644 system_svc.go:56] duration metric: took 14.453145ms WaitForService to wait for kubelet
	I1019 17:31:36.729808  217644 kubeadm.go:587] duration metric: took 15.229537075s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:31:36.729829  217644 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:31:36.732789  217644 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:31:36.732816  217644 node_conditions.go:123] node cpu capacity is 2
	I1019 17:31:36.732831  217644 node_conditions.go:105] duration metric: took 2.995768ms to run NodePressure ...
	I1019 17:31:36.732843  217644 start.go:242] waiting for startup goroutines ...
	I1019 17:31:36.732850  217644 start.go:247] waiting for cluster config update ...
	I1019 17:31:36.732861  217644 start.go:256] writing updated cluster config ...
	I1019 17:31:36.733139  217644 ssh_runner.go:195] Run: rm -f paused
	I1019 17:31:36.745010  217644 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:31:36.750050  217644 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-28psj" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:36.761049  217644 pod_ready.go:94] pod "coredns-5dd5756b68-28psj" is "Ready"
	I1019 17:31:36.761118  217644 pod_ready.go:86] duration metric: took 11.043003ms for pod "coredns-5dd5756b68-28psj" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:36.769018  217644 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:36.778254  217644 pod_ready.go:94] pod "etcd-old-k8s-version-125363" is "Ready"
	I1019 17:31:36.778278  217644 pod_ready.go:86] duration metric: took 9.238287ms for pod "etcd-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:36.782211  217644 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:36.788746  217644 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-125363" is "Ready"
	I1019 17:31:36.788812  217644 pod_ready.go:86] duration metric: took 6.565199ms for pod "kube-apiserver-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:36.793050  217644 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:37.151850  217644 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-125363" is "Ready"
	I1019 17:31:37.151875  217644 pod_ready.go:86] duration metric: took 358.756687ms for pod "kube-controller-manager-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:37.350975  217644 pod_ready.go:83] waiting for pod "kube-proxy-zjv4r" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:37.749652  217644 pod_ready.go:94] pod "kube-proxy-zjv4r" is "Ready"
	I1019 17:31:37.749682  217644 pod_ready.go:86] duration metric: took 398.684684ms for pod "kube-proxy-zjv4r" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:37.950180  217644 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:38.349601  217644 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-125363" is "Ready"
	I1019 17:31:38.349624  217644 pod_ready.go:86] duration metric: took 399.420667ms for pod "kube-scheduler-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:31:38.349635  217644 pod_ready.go:40] duration metric: took 1.604594977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:31:38.438792  217644 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1019 17:31:38.442085  217644 out.go:203] 
	W1019 17:31:38.445222  217644 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 17:31:38.448311  217644 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 17:31:38.451309  217644 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-125363" cluster and "default" namespace by default
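
The skew warning above fires because /usr/local/bin/kubectl is at 1.33.2 while this control plane runs 1.28.0, five minor versions apart and outside kubectl's supported +/-1 skew. The log's own workaround routes commands through a version-matched kubectl that minikube downloads for the profile, e.g.:

	minikube -p old-k8s-version-125363 kubectl -- get pods -A
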
	
	
	==> CRI-O <==
	Oct 19 17:31:35 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:35.887577246Z" level=info msg="Created container bf7a60b01ff161e6884d13cee1a7a73fa471407d0fa2fff51d1054bc77a2a600: kube-system/coredns-5dd5756b68-28psj/coredns" id=718b8c1f-bb98-4ef7-a3cd-ea3e73fdbed9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:31:35 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:35.888602962Z" level=info msg="Starting container: bf7a60b01ff161e6884d13cee1a7a73fa471407d0fa2fff51d1054bc77a2a600" id=d23745f3-9203-4103-aa60-28b050977e8d name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:31:35 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:35.890378254Z" level=info msg="Started container" PID=1954 containerID=bf7a60b01ff161e6884d13cee1a7a73fa471407d0fa2fff51d1054bc77a2a600 description=kube-system/coredns-5dd5756b68-28psj/coredns id=d23745f3-9203-4103-aa60-28b050977e8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=946d0edb8cec1baca5e80bd27461695e2a8ae0fe4bd2953f7e9d510723bbd435
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.021544869Z" level=info msg="Running pod sandbox: default/busybox/POD" id=93315747-4557-4e29-b2b5-c5d74e958e14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.021622573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.029300989Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:71049e9989fe4da9d28f9d1ca88d62e03e27a702c7af7272908f62906513bb25 UID:619df5fa-7c94-408b-8f0c-3fa2d4f82639 NetNS:/var/run/netns/e6a2e213-5996-445c-a6d5-11ebc8eb8c33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004cb5f8}] Aliases:map[]}"
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.029340473Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.067772208Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:71049e9989fe4da9d28f9d1ca88d62e03e27a702c7af7272908f62906513bb25 UID:619df5fa-7c94-408b-8f0c-3fa2d4f82639 NetNS:/var/run/netns/e6a2e213-5996-445c-a6d5-11ebc8eb8c33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004cb5f8}] Aliases:map[]}"
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.067984257Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.075982588Z" level=info msg="Ran pod sandbox 71049e9989fe4da9d28f9d1ca88d62e03e27a702c7af7272908f62906513bb25 with infra container: default/busybox/POD" id=93315747-4557-4e29-b2b5-c5d74e958e14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.079747592Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=799aef98-0c60-4d0a-9d84-5dc6e17a50da name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.080051047Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=799aef98-0c60-4d0a-9d84-5dc6e17a50da name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.080192128Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=799aef98-0c60-4d0a-9d84-5dc6e17a50da name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.084194683Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f5e6bc27-cdcb-493f-a854-a3294f94803c name=/runtime.v1.ImageService/PullImage
	Oct 19 17:31:39 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:39.086991735Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.136107236Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f5e6bc27-cdcb-493f-a854-a3294f94803c name=/runtime.v1.ImageService/PullImage
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.140745756Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d0bf2fc-6111-460d-be52-846b780feded name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.142612438Z" level=info msg="Creating container: default/busybox/busybox" id=0d0bb082-bbfe-473c-acdb-ca852ca70956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.143339794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.148680767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.149130481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.167026982Z" level=info msg="Created container af55f8152dfe692ccc751d09cd1427958bed0fd8ef8fd550d8c9c38d9c387082: default/busybox/busybox" id=0d0bb082-bbfe-473c-acdb-ca852ca70956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.168164002Z" level=info msg="Starting container: af55f8152dfe692ccc751d09cd1427958bed0fd8ef8fd550d8c9c38d9c387082" id=2b0c92b0-2973-4754-8adf-3a66a67d0078 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:31:41 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:41.171752954Z" level=info msg="Started container" PID=2008 containerID=af55f8152dfe692ccc751d09cd1427958bed0fd8ef8fd550d8c9c38d9c387082 description=default/busybox/busybox id=2b0c92b0-2973-4754-8adf-3a66a67d0078 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71049e9989fe4da9d28f9d1ca88d62e03e27a702c7af7272908f62906513bb25
	Oct 19 17:31:47 old-k8s-version-125363 crio[843]: time="2025-10-19T17:31:47.881780117Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	af55f8152dfe6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   71049e9989fe4       busybox                                          default
	bf7a60b01ff16       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   946d0edb8cec1       coredns-5dd5756b68-28psj                         kube-system
	c688255c3aaaf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   0fc3bc097518c       storage-provisioner                              kube-system
	66710b950d481       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   57cbc4c27c297       kindnet-sgp8p                                    kube-system
	028770909ab91       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   b8f19a3b3da4d       kube-proxy-zjv4r                                 kube-system
	eba926affea4f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      50 seconds ago      Running             kube-scheduler            0                   15474a93627e7       kube-scheduler-old-k8s-version-125363            kube-system
	54c70a6ca0a5a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      50 seconds ago      Running             kube-controller-manager   0                   d1abd42e8632e       kube-controller-manager-old-k8s-version-125363   kube-system
	d92e4f200f6b1       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      50 seconds ago      Running             kube-apiserver            0                   b7e3aab3241c5       kube-apiserver-old-k8s-version-125363            kube-system
	caf878c6f2888       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      50 seconds ago      Running             etcd                      0                   1fbde9ba686a1       etcd-old-k8s-version-125363                      kube-system
	
	
	==> coredns [bf7a60b01ff161e6884d13cee1a7a73fa471407d0fa2fff51d1054bc77a2a600] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60766 - 9069 "HINFO IN 7309694940559129082.3983817422213256632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003868636s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-125363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-125363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=old-k8s-version-125363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_31_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:31:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-125363
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:31:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:31:39 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:31:39 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:31:39 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:31:39 +0000   Sun, 19 Oct 2025 17:31:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-125363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ae1e6c1c-619e-4a12-af9f-474dab50c58c
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-28psj                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-125363                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-sgp8p                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-125363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-125363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-zjv4r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-125363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-125363 event: Registered Node old-k8s-version-125363 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-125363 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct19 17:08] overlayfs: idmapped layers are currently not supported
	[  +0.231072] overlayfs: idmapped layers are currently not supported
	[Oct19 17:09] overlayfs: idmapped layers are currently not supported
	[ +28.820689] overlayfs: idmapped layers are currently not supported
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [caf878c6f28886e463f4eac3e27c4f5ddbbad412c8c7779d27f7979459ccb663] <==
	{"level":"info","ts":"2025-10-19T17:30:59.709502Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:30:59.70954Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:30:59.716987Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T17:30:59.717145Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:30:59.720175Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:30:59.72387Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T17:30:59.724059Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T17:31:00.092831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-19T17:31:00.092975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-19T17:31:00.093058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-19T17:31:00.093106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-19T17:31:00.09314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:31:00.093196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-19T17:31:00.093233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:31:00.100143Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-125363 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T17:31:00.100281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:31:00.101505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T17:31:00.106844Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:31:00.107965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T17:31:00.110161Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T17:31:00.110255Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T17:31:00.122894Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:31:00.146943Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:31:00.147262Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:31:00.147304Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 17:31:50 up  1:14,  0 user,  load average: 3.30, 3.72, 3.29
	Linux old-k8s-version-125363 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [66710b950d481287b9565b00b9d09477676b0f342e45ae0f13f46d2ab9fc81b3] <==
	I1019 17:31:24.796274       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:31:24.796558       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:31:24.796729       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:31:24.796746       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:31:24.796757       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:31:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:31:25.028161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:31:25.028246       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:31:25.028288       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:31:25.028695       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:31:25.294645       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:31:25.294754       1 metrics.go:72] Registering metrics
	I1019 17:31:25.294869       1 controller.go:711] "Syncing nftables rules"
	I1019 17:31:35.032929       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:31:35.032991       1 main.go:301] handling current node
	I1019 17:31:45.033536       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:31:45.033581       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d92e4f200f6b1d2d3b974be4b2dc580e99594c5758e7aadd7a4727bb15a434ad] <==
	I1019 17:31:04.687118       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 17:31:04.699786       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 17:31:04.687417       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 17:31:04.700328       1 aggregator.go:166] initial CRD sync complete...
	I1019 17:31:04.700361       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 17:31:04.700389       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:31:04.700418       1 cache.go:39] Caches are synced for autoregister controller
	E1019 17:31:04.856529       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	E1019 17:31:04.856723       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1019 17:31:05.069043       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:31:05.238422       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:31:05.253883       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:31:05.254083       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:31:06.315897       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:31:06.377435       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:31:06.437973       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:31:06.450348       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 17:31:06.451806       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 17:31:06.456728       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:31:06.555352       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 17:31:08.046093       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 17:31:08.073150       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:31:08.093241       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1019 17:31:21.002965       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 17:31:21.200446       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [54c70a6ca0a5a493f2ddbac0d1690296acbd1538fe30611bf64dd3f8989252a6] <==
	I1019 17:31:20.495859       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1019 17:31:20.497121       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1019 17:31:20.499585       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1019 17:31:20.552878       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 17:31:20.890346       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:31:20.942697       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:31:20.942737       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 17:31:21.008876       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1019 17:31:21.214889       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-sgp8p"
	I1019 17:31:21.226443       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zjv4r"
	I1019 17:31:21.361779       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8787l"
	I1019 17:31:21.377314       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-28psj"
	I1019 17:31:21.397433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="388.09633ms"
	I1019 17:31:21.412788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.300695ms"
	I1019 17:31:21.412895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.663µs"
	I1019 17:31:22.727666       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1019 17:31:22.760338       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8787l"
	I1019 17:31:22.807789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.259455ms"
	I1019 17:31:22.819247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.41614ms"
	I1019 17:31:22.819357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.432µs"
	I1019 17:31:35.443057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.166µs"
	I1019 17:31:35.469938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.326µs"
	I1019 17:31:36.553977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.802958ms"
	I1019 17:31:36.555727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.747µs"
	I1019 17:31:40.405622       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [028770909ab910d988cd83955dda1f5e7ff34101c3ff6f6551512643c476cf1c] <==
	I1019 17:31:21.846943       1 server_others.go:69] "Using iptables proxy"
	I1019 17:31:21.873611       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 17:31:21.933661       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:31:21.935992       1 server_others.go:152] "Using iptables Proxier"
	I1019 17:31:21.936031       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 17:31:21.936039       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 17:31:21.936072       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 17:31:21.936248       1 server.go:846] "Version info" version="v1.28.0"
	I1019 17:31:21.936257       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:31:21.937669       1 config.go:188] "Starting service config controller"
	I1019 17:31:21.937683       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 17:31:21.937706       1 config.go:97] "Starting endpoint slice config controller"
	I1019 17:31:21.937723       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 17:31:21.938044       1 config.go:315] "Starting node config controller"
	I1019 17:31:21.938056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 17:31:22.038625       1 shared_informer.go:318] Caches are synced for node config
	I1019 17:31:22.038668       1 shared_informer.go:318] Caches are synced for service config
	I1019 17:31:22.038707       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eba926affea4faa75b274e50febf7b6e6993182f93e033999856affa64d5c9ae] <==
	W1019 17:31:04.835438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 17:31:04.835506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1019 17:31:04.840581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 17:31:04.840679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1019 17:31:04.840718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 17:31:04.840767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1019 17:31:05.597839       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 17:31:05.597975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1019 17:31:05.608521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1019 17:31:05.608646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1019 17:31:05.709517       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1019 17:31:05.709634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1019 17:31:05.863421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 17:31:05.863519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1019 17:31:05.890256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 17:31:05.890368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1019 17:31:05.899372       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1019 17:31:05.899472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1019 17:31:06.047447       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1019 17:31:06.048053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1019 17:31:06.075615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 17:31:06.075728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1019 17:31:06.294856       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1019 17:31:06.294971       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1019 17:31:09.134923       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.230648    1375 topology_manager.go:215] "Topology Admit Handler" podUID="f145e324-d5e7-4643-a624-fc7b3420f6c6" podNamespace="kube-system" podName="kube-proxy-zjv4r"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319662    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c027cd5-cea6-4170-860f-470cba905d64-xtables-lock\") pod \"kindnet-sgp8p\" (UID: \"0c027cd5-cea6-4170-860f-470cba905d64\") " pod="kube-system/kindnet-sgp8p"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319717    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f145e324-d5e7-4643-a624-fc7b3420f6c6-kube-proxy\") pod \"kube-proxy-zjv4r\" (UID: \"f145e324-d5e7-4643-a624-fc7b3420f6c6\") " pod="kube-system/kube-proxy-zjv4r"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319745    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f145e324-d5e7-4643-a624-fc7b3420f6c6-xtables-lock\") pod \"kube-proxy-zjv4r\" (UID: \"f145e324-d5e7-4643-a624-fc7b3420f6c6\") " pod="kube-system/kube-proxy-zjv4r"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319770    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfh6f\" (UniqueName: \"kubernetes.io/projected/f145e324-d5e7-4643-a624-fc7b3420f6c6-kube-api-access-cfh6f\") pod \"kube-proxy-zjv4r\" (UID: \"f145e324-d5e7-4643-a624-fc7b3420f6c6\") " pod="kube-system/kube-proxy-zjv4r"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319796    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c027cd5-cea6-4170-860f-470cba905d64-lib-modules\") pod \"kindnet-sgp8p\" (UID: \"0c027cd5-cea6-4170-860f-470cba905d64\") " pod="kube-system/kindnet-sgp8p"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319818    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-995hx\" (UniqueName: \"kubernetes.io/projected/0c027cd5-cea6-4170-860f-470cba905d64-kube-api-access-995hx\") pod \"kindnet-sgp8p\" (UID: \"0c027cd5-cea6-4170-860f-470cba905d64\") " pod="kube-system/kindnet-sgp8p"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319842    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0c027cd5-cea6-4170-860f-470cba905d64-cni-cfg\") pod \"kindnet-sgp8p\" (UID: \"0c027cd5-cea6-4170-860f-470cba905d64\") " pod="kube-system/kindnet-sgp8p"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: I1019 17:31:21.319864    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f145e324-d5e7-4643-a624-fc7b3420f6c6-lib-modules\") pod \"kube-proxy-zjv4r\" (UID: \"f145e324-d5e7-4643-a624-fc7b3420f6c6\") " pod="kube-system/kube-proxy-zjv4r"
	Oct 19 17:31:21 old-k8s-version-125363 kubelet[1375]: W1019 17:31:21.545069    1375 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/crio-57cbc4c27c297189614625e77f1caa85cb02f66cc56ff5bfc12b25d01f17143c WatchSource:0}: Error finding container 57cbc4c27c297189614625e77f1caa85cb02f66cc56ff5bfc12b25d01f17143c: Status 404 returned error can't find the container with id 57cbc4c27c297189614625e77f1caa85cb02f66cc56ff5bfc12b25d01f17143c
	Oct 19 17:31:22 old-k8s-version-125363 kubelet[1375]: I1019 17:31:22.442517    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zjv4r" podStartSLOduration=1.442457602 podCreationTimestamp="2025-10-19 17:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:31:22.442120119 +0000 UTC m=+14.454307470" watchObservedRunningTime="2025-10-19 17:31:22.442457602 +0000 UTC m=+14.454644945"
	Oct 19 17:31:28 old-k8s-version-125363 kubelet[1375]: I1019 17:31:28.281136    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-sgp8p" podStartSLOduration=4.185213213 podCreationTimestamp="2025-10-19 17:31:21 +0000 UTC" firstStartedPulling="2025-10-19 17:31:21.550171872 +0000 UTC m=+13.562359215" lastFinishedPulling="2025-10-19 17:31:24.646048867 +0000 UTC m=+16.658236218" observedRunningTime="2025-10-19 17:31:25.464656255 +0000 UTC m=+17.476843606" watchObservedRunningTime="2025-10-19 17:31:28.281090216 +0000 UTC m=+20.293277567"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: I1019 17:31:35.392701    1375 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: I1019 17:31:35.437699    1375 topology_manager.go:215] "Topology Admit Handler" podUID="f627e140-a201-479b-9d5e-a9f9844ed7d3" podNamespace="kube-system" podName="coredns-5dd5756b68-28psj"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: I1019 17:31:35.447046    1375 topology_manager.go:215] "Topology Admit Handler" podUID="03c7a789-0ea1-4525-b93a-c70e9cbff9df" podNamespace="kube-system" podName="storage-provisioner"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: I1019 17:31:35.528099    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fn6x\" (UniqueName: \"kubernetes.io/projected/03c7a789-0ea1-4525-b93a-c70e9cbff9df-kube-api-access-4fn6x\") pod \"storage-provisioner\" (UID: \"03c7a789-0ea1-4525-b93a-c70e9cbff9df\") " pod="kube-system/storage-provisioner"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: I1019 17:31:35.528306    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/03c7a789-0ea1-4525-b93a-c70e9cbff9df-tmp\") pod \"storage-provisioner\" (UID: \"03c7a789-0ea1-4525-b93a-c70e9cbff9df\") " pod="kube-system/storage-provisioner"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: I1019 17:31:35.528412    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f627e140-a201-479b-9d5e-a9f9844ed7d3-config-volume\") pod \"coredns-5dd5756b68-28psj\" (UID: \"f627e140-a201-479b-9d5e-a9f9844ed7d3\") " pod="kube-system/coredns-5dd5756b68-28psj"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: I1019 17:31:35.528509    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9m8q\" (UniqueName: \"kubernetes.io/projected/f627e140-a201-479b-9d5e-a9f9844ed7d3-kube-api-access-v9m8q\") pod \"coredns-5dd5756b68-28psj\" (UID: \"f627e140-a201-479b-9d5e-a9f9844ed7d3\") " pod="kube-system/coredns-5dd5756b68-28psj"
	Oct 19 17:31:35 old-k8s-version-125363 kubelet[1375]: W1019 17:31:35.810923    1375 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/crio-946d0edb8cec1baca5e80bd27461695e2a8ae0fe4bd2953f7e9d510723bbd435 WatchSource:0}: Error finding container 946d0edb8cec1baca5e80bd27461695e2a8ae0fe4bd2953f7e9d510723bbd435: Status 404 returned error can't find the container with id 946d0edb8cec1baca5e80bd27461695e2a8ae0fe4bd2953f7e9d510723bbd435
	Oct 19 17:31:36 old-k8s-version-125363 kubelet[1375]: I1019 17:31:36.527791    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.527741392 podCreationTimestamp="2025-10-19 17:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:31:36.512651272 +0000 UTC m=+28.524838631" watchObservedRunningTime="2025-10-19 17:31:36.527741392 +0000 UTC m=+28.539928743"
	Oct 19 17:31:38 old-k8s-version-125363 kubelet[1375]: I1019 17:31:38.718108    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-28psj" podStartSLOduration=17.718062839 podCreationTimestamp="2025-10-19 17:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:31:36.53182051 +0000 UTC m=+28.544007894" watchObservedRunningTime="2025-10-19 17:31:38.718062839 +0000 UTC m=+30.730250190"
	Oct 19 17:31:38 old-k8s-version-125363 kubelet[1375]: I1019 17:31:38.718302    1375 topology_manager.go:215] "Topology Admit Handler" podUID="619df5fa-7c94-408b-8f0c-3fa2d4f82639" podNamespace="default" podName="busybox"
	Oct 19 17:31:38 old-k8s-version-125363 kubelet[1375]: I1019 17:31:38.758271    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m522x\" (UniqueName: \"kubernetes.io/projected/619df5fa-7c94-408b-8f0c-3fa2d4f82639-kube-api-access-m522x\") pod \"busybox\" (UID: \"619df5fa-7c94-408b-8f0c-3fa2d4f82639\") " pod="default/busybox"
	Oct 19 17:31:39 old-k8s-version-125363 kubelet[1375]: W1019 17:31:39.071368    1375 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/crio-71049e9989fe4da9d28f9d1ca88d62e03e27a702c7af7272908f62906513bb25 WatchSource:0}: Error finding container 71049e9989fe4da9d28f9d1ca88d62e03e27a702c7af7272908f62906513bb25: Status 404 returned error can't find the container with id 71049e9989fe4da9d28f9d1ca88d62e03e27a702c7af7272908f62906513bb25
	
	
	==> storage-provisioner [c688255c3aaafc6373cfbb1e93973729cee4674f20e1a36a65f309b9e23c6288] <==
	I1019 17:31:35.861258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:31:35.885840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:31:35.885933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 17:31:35.907153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:31:35.910504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc89ac55-acf0-4d8e-a1f1-fca5e969b730", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-125363_987f7626-dcf8-46b2-8930-1d8e1511b94e became leader
	I1019 17:31:35.913269       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-125363_987f7626-dcf8-46b2-8930-1d8e1511b94e!
	I1019 17:31:36.022953       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-125363_987f7626-dcf8-46b2-8930-1d8e1511b94e!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-125363 -n old-k8s-version-125363
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-125363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.57s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.52s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.475695ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
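
Note on the failure mode above: before enabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node (the command is quoted verbatim in the stderr). runc enumerates containers from its state root, which for root defaults to /run/runc; on this crio node that directory does not exist, so runc exits 1 and minikube aborts with MK_ADDON_ENABLE_PAUSED. A minimal sketch for reproducing the check by hand, assuming /run/runc really is the state root in play here:

	# Run the same paused check minikube performs, via ssh into the node.
	out/minikube-linux-arm64 -p no-preload-038781 ssh -- sudo runc list -f json
	# On this node it should fail exactly as in the stderr above:
	#   level=error msg="open /run/runc: no such file or directory"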
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-038781 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-038781 describe deploy/metrics-server -n kube-system: exit status 1 (81.668133ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-038781 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
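
Note: the assertion at start_stop_delete_test.go:219 greps the describe output for the rewritten image reference, and it is empty here because the metrics-server deployment was never created. A hand-check sketch of the same expectation (the jsonpath query is my construction, not the test's exact command):

	# Print the image the metrics-server deployment actually runs.
	kubectl --context no-preload-038781 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# A passing run would print: fake.domain/registry.k8s.io/echoserver:1.4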
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-038781
helpers_test.go:243: (dbg) docker inspect no-preload-038781:

-- stdout --
	[
	    {
	        "Id": "4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc",
	        "Created": "2025-10-19T17:31:51.406561575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225472,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:31:51.566681509Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/hosts",
	        "LogPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc-json.log",
	        "Name": "/no-preload-038781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-038781:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-038781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc",
	                "LowerDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-038781",
	                "Source": "/var/lib/docker/volumes/no-preload-038781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-038781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-038781",
	                "name.minikube.sigs.k8s.io": "no-preload-038781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1b09489df35ed6fc743bfd81d62e7c6b9d3fc10584639554563ae99c25399b3",
	            "SandboxKey": "/var/run/docker/netns/b1b09489df35",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-038781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:bc:f3:79:c6:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b07775101cd68c8ddd9de09f237af6ede6d8644dfb4bb5013ca32815c3f150a",
	                    "EndpointID": "32e7ee7debbfc76cfab3181078eec8f4527b867d0695020b8f36b418918efbc1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-038781",
	                        "4de6d765b1ef"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
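
Note: if you only need the published ports out of an inspect dump like the one above, a jq filter sketch (jq on the host is an assumption; the profile name comes from this log):

	# Extract the host port that Docker mapped to the API server port 8443.
	docker inspect no-preload-038781 \
	  | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'
	# Prints 33091 for the state captured above.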
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-038781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-038781 logs -n 25: (1.218298013s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-953581 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo docker system info                                                                                                                                                                                                      │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo containerd config dump                                                                                                                                                                                                  │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo crio config                                                                                                                                                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ delete  │ -p bridge-953581                                                                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ stop    │ -p old-k8s-version-125363 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:32 UTC │
	│ start   │ -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:32:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:32:05.705396  227579 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:32:05.705954  227579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:32:05.705989  227579 out.go:374] Setting ErrFile to fd 2...
	I1019 17:32:05.706009  227579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:32:05.706312  227579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:32:05.706789  227579 out.go:368] Setting JSON to false
	I1019 17:32:05.707765  227579 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4474,"bootTime":1760890652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:32:05.707866  227579 start.go:143] virtualization:  
	I1019 17:32:05.711313  227579 out.go:179] * [old-k8s-version-125363] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:32:05.715503  227579 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:32:05.715576  227579 notify.go:221] Checking for updates...
	I1019 17:32:05.721725  227579 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:32:05.724829  227579 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:05.727744  227579 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:32:05.730660  227579 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:32:05.734484  227579 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:32:05.738025  227579 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:32:05.741654  227579 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 17:32:05.744751  227579 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:32:05.788129  227579 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:32:05.788296  227579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:32:05.885509  227579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-19 17:32:05.876344573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:32:05.885605  227579 docker.go:319] overlay module found
	I1019 17:32:05.888753  227579 out.go:179] * Using the docker driver based on existing profile
	I1019 17:32:05.891597  227579 start.go:309] selected driver: docker
	I1019 17:32:05.891616  227579 start.go:930] validating driver "docker" against &{Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:05.891714  227579 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:32:05.892404  227579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:32:05.986565  227579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-19 17:32:05.977132066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
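
The two `docker system info --format "{{json .}}"` probes above are how minikube snapshots daemon capabilities before reusing a profile. A minimal standalone equivalent, assuming `jq` is available on the host, that extracts the fields the rest of the log goes on to use:

	# Query the Docker daemon as JSON and keep the fields minikube checks above.
	# Field names match the logged output; jq is an assumption, any JSON tool works.
	docker system info --format '{{json .}}' \
	  | jq '{Driver, CgroupDriver, NCPU, MemTotal, ServerVersion, Architecture}'
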
	I1019 17:32:05.986920  227579 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:32:05.986967  227579 cni.go:84] Creating CNI manager for ""
	I1019 17:32:05.987017  227579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:05.987052  227579 start.go:353] cluster config:
	{Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:05.992003  227579 out.go:179] * Starting "old-k8s-version-125363" primary control-plane node in "old-k8s-version-125363" cluster
	I1019 17:32:05.995071  227579 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:32:06.007364  227579 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:32:06.010349  227579 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:32:06.010471  227579 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 17:32:06.010483  227579 cache.go:59] Caching tarball of preloaded images
	I1019 17:32:06.010663  227579 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:32:06.011141  227579 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:32:06.011168  227579 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 17:32:06.011331  227579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/config.json ...
	I1019 17:32:06.045794  227579 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:32:06.045819  227579 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:32:06.045837  227579 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:32:06.045860  227579 start.go:360] acquireMachinesLock for old-k8s-version-125363: {Name:mkd08e65b205b510576dbfd42cd5fdbceaaa1817 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:32:06.045929  227579 start.go:364] duration metric: took 48.247µs to acquireMachinesLock for "old-k8s-version-125363"
	I1019 17:32:06.045951  227579 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:32:06.045963  227579 fix.go:54] fixHost starting: 
	I1019 17:32:06.046242  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:06.075194  227579 fix.go:112] recreateIfNeeded on old-k8s-version-125363: state=Stopped err=<nil>
	W1019 17:32:06.075221  227579 fix.go:138] unexpected machine state, will restart: <nil>
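
recreateIfNeeded above reads the container state and, finding it Stopped, schedules a restart rather than a recreate. A sketch of that decision in plain shell, using the same inspect and start commands the log records for this profile:

	# Mirror of the state check in fix.go: start the container only when it
	# is not already running (profile name taken from the log above).
	state=$(docker container inspect old-k8s-version-125363 --format '{{.State.Status}}')
	if [ "$state" != "running" ]; then
	  docker start old-k8s-version-125363
	fi
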
	I1019 17:32:05.411634  225032 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.505485853s)
	I1019 17:32:05.411655  225032 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.50566422s)
	I1019 17:32:05.411723  225032 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:32:05.411660  225032 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1019 17:32:05.411801  225032 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1019 17:32:05.411827  225032 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1019 17:32:09.114937  225032 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.703085356s)
	I1019 17:32:09.114963  225032 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1019 17:32:09.114990  225032 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.703250889s)
	I1019 17:32:09.115015  225032 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1019 17:32:09.115110  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:32:09.119735  225032 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1019 17:32:09.119779  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1019 17:32:09.200802  225032 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:32:09.200953  225032 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:32:09.822243  225032 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1019 17:32:09.822280  225032 cache_images.go:125] Successfully loaded all cached images
	I1019 17:32:09.822287  225032 cache_images.go:94] duration metric: took 13.329116697s to LoadCachedImages
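
The image-loading loop that finishes here follows a fixed pattern: stat the target path on the node, copy the cached tarball over only when the stat fails, then `podman load` it. A condensed sketch of one iteration, with `node` as a hypothetical SSH alias standing in for minikube's internal ssh_runner:

	# One iteration of the cache-load loop above (storage-provisioner image).
	img=/var/lib/minikube/images/storage-provisioner_v5
	cache=/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	# Existence check, as in the logged `stat -c "%s %y"`; non-zero exit means transfer.
	ssh node "stat -c '%s %y' $img" >/dev/null 2>&1 || scp "$cache" node:"$img"
	ssh node "sudo podman load -i $img"
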
	I1019 17:32:09.822297  225032 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:32:09.822399  225032 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-038781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:32:09.822489  225032 ssh_runner.go:195] Run: crio config
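
The unit text above is installed as a systemd drop-in (the scp of 10-kubeadm.conf appears further down), with the empty `ExecStart=` line clearing any packaged command before the minikube-specific one is set. A hand-written sketch of the same step, using the exact ExecStart from the log; the real flow transfers the file over SSH instead:

	# Install the kubelet drop-in shown above, then make systemd re-read units.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-038781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

	[Install]
	EOF
	sudo systemctl daemon-reload
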
	I1019 17:32:06.078291  227579 out.go:252] * Restarting existing docker container for "old-k8s-version-125363" ...
	I1019 17:32:06.078370  227579 cli_runner.go:164] Run: docker start old-k8s-version-125363
	I1019 17:32:06.394573  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:06.416143  227579 kic.go:430] container "old-k8s-version-125363" state is running.
	I1019 17:32:06.417276  227579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:32:06.441393  227579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/config.json ...
	I1019 17:32:06.441700  227579 machine.go:94] provisionDockerMachine start ...
	I1019 17:32:06.441784  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:06.474828  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:06.475257  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:06.475268  227579 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:32:06.476051  227579 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 17:32:09.646113  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-125363
	
	I1019 17:32:09.646142  227579 ubuntu.go:182] provisioning hostname "old-k8s-version-125363"
	I1019 17:32:09.646212  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:09.666059  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:09.666358  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:09.666369  227579 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-125363 && echo "old-k8s-version-125363" | sudo tee /etc/hostname
	I1019 17:32:09.836621  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-125363
	
	I1019 17:32:09.836694  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:09.859859  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:09.860169  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:09.860187  227579 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-125363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-125363/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-125363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:32:10.041777  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:32:10.041799  227579 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:32:10.041817  227579 ubuntu.go:190] setting up certificates
	I1019 17:32:10.041827  227579 provision.go:84] configureAuth start
	I1019 17:32:10.041888  227579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:32:10.090664  227579 provision.go:143] copyHostCerts
	I1019 17:32:10.090729  227579 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:32:10.090747  227579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:32:10.090829  227579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:32:10.090947  227579 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:32:10.090952  227579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:32:10.090985  227579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:32:10.091044  227579 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:32:10.091049  227579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:32:10.091074  227579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:32:10.091130  227579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-125363 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-125363]
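
configureAuth regenerates the machine's server certificate with the SAN list shown above. minikube does this through Go's crypto libraries, not openssl; purely as a point of comparison, an openssl sketch that would produce a server certificate with the same subject and SAN set (ca.pem/ca-key.pem as named in the log, output filenames assumed):

	# Not minikube's code path; an openssl equivalent of the logged cert generation.
	openssl req -new -newkey rsa:2048 -nodes -subj '/O=jenkins.old-k8s-version-125363' \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-125363')
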
	I1019 17:32:09.891797  225032 cni.go:84] Creating CNI manager for ""
	I1019 17:32:09.891822  225032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:09.891840  225032 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:32:09.891868  225032 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-038781 NodeName:no-preload-038781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:32:09.891995  225032 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-038781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:32:09.892066  225032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:32:09.901826  225032 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1019 17:32:09.901899  225032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1019 17:32:09.912567  225032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1019 17:32:09.912654  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1019 17:32:09.913862  225032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1019 17:32:09.913944  225032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1019 17:32:09.919861  225032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1019 17:32:09.919895  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1019 17:32:10.945507  225032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:32:10.966028  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1019 17:32:10.974124  225032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1019 17:32:10.974165  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1019 17:32:11.320989  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1019 17:32:11.327912  225032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1019 17:32:11.332071  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
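
Each binary download above carries a `checksum=file:` URL, so the fetched file is verified against the published sha256 before it is transferred. The same guarantee from plain curl, following the upstream Kubernetes install procedure:

	# Download kubelet for v1.34.1/arm64 and verify it against the published
	# checksum, mirroring the checksum=file: URLs in the log.
	v=v1.34.1
	curl -fsSLO "https://dl.k8s.io/release/$v/bin/linux/arm64/kubelet"
	echo "$(curl -fsSL https://dl.k8s.io/release/$v/bin/linux/arm64/kubelet.sha256)  kubelet" \
	  | sha256sum --check
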
	I1019 17:32:11.799676  225032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:32:11.810632  225032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:32:11.828521  225032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:32:11.845458  225032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
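
Once kubeadm.yaml.new lands on the node it could also be sanity-checked before init; recent kubeadm releases ship a `config validate` subcommand for exactly this. Shown here as an optional step the logged flow does not perform:

	# Optional, not part of the logged flow: validate the rendered config
	# with the pinned kubeadm binary before running `kubeadm init`.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
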
	I1019 17:32:11.862567  225032 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:32:11.867493  225032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:32:11.879803  225032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:12.009314  225032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:12.030388  225032 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781 for IP: 192.168.76.2
	I1019 17:32:12.030459  225032 certs.go:195] generating shared ca certs ...
	I1019 17:32:12.030500  225032 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:12.030750  225032 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:32:12.030837  225032 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:32:12.030875  225032 certs.go:257] generating profile certs ...
	I1019 17:32:12.030978  225032 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key
	I1019 17:32:12.031034  225032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.crt with IP's: []
	I1019 17:32:13.050657  225032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.crt ...
	I1019 17:32:13.050730  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.crt: {Name:mk3f290cc4c355f70dccace558882b1a84846e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.050950  225032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key ...
	I1019 17:32:13.050984  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key: {Name:mk19b07416c5061089c7b6549b161a2b3570a3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.051124  225032 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d
	I1019 17:32:13.051159  225032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 17:32:13.331976  225032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d ...
	I1019 17:32:13.332009  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d: {Name:mkc0def6fd5a2512785b39750f1e37f96839be83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.332179  225032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d ...
	I1019 17:32:13.332195  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d: {Name:mk029fd686d0344ce1845ee5718bc0ff0b5ae626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.332269  225032 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt
	I1019 17:32:13.332351  225032 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key
	I1019 17:32:13.332414  225032 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key
	I1019 17:32:13.332433  225032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt with IP's: []
	I1019 17:32:14.130589  225032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt ...
	I1019 17:32:14.130620  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt: {Name:mk88d84623ca49934579a6025399288bc768dc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:14.130802  225032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key ...
	I1019 17:32:14.130816  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key: {Name:mk8bbb9dbb3136c32eb9a12263c10da7dd73b55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
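
The apiserver certificate generated above embeds the service-network and node IPs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2); its SAN list can be read back for verification. A read-only check, assuming OpenSSL 1.1.1+ for the `-ext` option:

	# Inspect the SANs on the freshly generated apiserver cert (path from the log).
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt
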
	I1019 17:32:14.131003  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:32:14.131051  225032 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:32:14.131064  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:32:14.131090  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:32:14.131118  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:32:14.131144  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:32:14.131191  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:32:14.131742  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:32:14.152114  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:32:14.173704  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:32:14.194107  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:32:14.214680  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:32:14.233722  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:32:14.256395  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:32:14.274204  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:32:14.300626  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:32:14.320472  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:32:14.339977  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:32:14.359398  225032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:32:14.372973  225032 ssh_runner.go:195] Run: openssl version
	I1019 17:32:14.379626  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:32:14.388363  225032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:14.392892  225032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:14.392966  225032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:14.434167  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:32:14.442995  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:32:14.451673  225032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:32:14.456124  225032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:32:14.456186  225032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:32:14.497639  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:32:14.506884  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:32:14.517169  225032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:32:14.521515  225032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:32:14.521631  225032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:32:14.562849  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
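
The link names b5213941.0, 51391683.0, and 3ec20f2e.0 above are not arbitrary: each is the OpenSSL subject hash of the corresponding certificate, which is how the system trust store locates a CA. The pattern the log runs three times, condensed to its two commands:

	# Derive the subject hash and create the trust-store symlink, exactly as
	# the logged commands do for minikubeCA.pem.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
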
	I1019 17:32:14.571447  225032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:32:14.575543  225032 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:32:14.575594  225032 kubeadm.go:401] StartCluster: {Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:14.575665  225032 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:32:14.575735  225032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:32:14.602660  225032 cri.go:89] found id: ""
	I1019 17:32:14.602771  225032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:32:14.611092  225032 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:32:14.619281  225032 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:32:14.619380  225032 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:32:14.627525  225032 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:32:14.627548  225032 kubeadm.go:158] found existing configuration files:
	
	I1019 17:32:14.627601  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:32:14.636052  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:32:14.636112  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:32:14.644413  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:32:14.652592  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:32:14.652682  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:32:14.660919  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:32:14.669388  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:32:14.669483  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:32:14.677898  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:32:14.686302  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:32:14.686469  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
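
The four grep-then-rm steps above implement stale-config cleanup: any kubeconfig that does not reference the expected control-plane endpoint is removed so kubeadm regenerates it. The same logic as one loop (sketch; filenames and endpoint taken from the log):

	# Remove kubeconfigs that don't point at the expected endpoint.
	ep='https://control-plane.minikube.internal:8443'
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done
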
	I1019 17:32:14.694386  225032 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:32:14.734240  225032 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:32:14.734498  225032 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:32:14.756560  225032 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:32:14.756642  225032 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:32:14.756680  225032 kubeadm.go:319] OS: Linux
	I1019 17:32:14.756734  225032 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:32:14.756815  225032 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:32:14.756895  225032 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:32:14.756964  225032 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:32:14.757035  225032 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:32:14.757113  225032 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:32:14.757184  225032 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:32:14.757259  225032 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:32:14.757324  225032 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:32:14.827223  225032 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:32:14.827392  225032 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:32:14.827494  225032 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:32:14.842943  225032 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:32:11.507504  227579 provision.go:177] copyRemoteCerts
	I1019 17:32:11.507575  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:32:11.507613  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:11.539031  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:11.678091  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1019 17:32:11.721579  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:32:11.794758  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:32:11.815935  227579 provision.go:87] duration metric: took 1.774086155s to configureAuth
	I1019 17:32:11.815960  227579 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:32:11.816154  227579 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:32:11.816276  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:11.834739  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:11.835038  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:11.835059  227579 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:32:12.246234  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:32:12.246259  227579 machine.go:97] duration metric: took 5.804541179s to provisionDockerMachine
	I1019 17:32:12.246270  227579 start.go:293] postStartSetup for "old-k8s-version-125363" (driver="docker")
	I1019 17:32:12.246281  227579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:32:12.246352  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:32:12.246393  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.269664  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.380077  227579 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:32:12.385691  227579 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:32:12.385729  227579 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:32:12.385741  227579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:32:12.385795  227579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:32:12.385880  227579 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:32:12.386010  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:32:12.394576  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:32:12.413964  227579 start.go:296] duration metric: took 167.679158ms for postStartSetup
	I1019 17:32:12.414055  227579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:32:12.414102  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.433575  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.536154  227579 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:32:12.541156  227579 fix.go:56] duration metric: took 6.495186439s for fixHost
	I1019 17:32:12.541184  227579 start.go:83] releasing machines lock for "old-k8s-version-125363", held for 6.495243162s
	I1019 17:32:12.541253  227579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:32:12.559881  227579 ssh_runner.go:195] Run: cat /version.json
	I1019 17:32:12.559931  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.559942  227579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:32:12.560004  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.585653  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.600379  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.698771  227579 ssh_runner.go:195] Run: systemctl --version
	I1019 17:32:12.797926  227579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:32:12.867719  227579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:32:12.872380  227579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:32:12.872450  227579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:32:12.880895  227579 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:32:12.880923  227579 start.go:496] detecting cgroup driver to use...
	I1019 17:32:12.880954  227579 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:32:12.881006  227579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:32:12.896945  227579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:32:12.911001  227579 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:32:12.911066  227579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:32:12.927416  227579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:32:12.941298  227579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:32:13.098170  227579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:32:13.243767  227579 docker.go:234] disabling docker service ...
	I1019 17:32:13.243845  227579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:32:13.259097  227579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:32:13.272077  227579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:32:13.413509  227579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:32:13.565441  227579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:32:13.582329  227579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:32:13.611177  227579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1019 17:32:13.611248  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.637907  227579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:32:13.637981  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.657565  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.668095  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.677615  227579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:32:13.688720  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.701425  227579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.712704  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.724503  227579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:32:13.736628  227579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:32:13.744704  227579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:13.881213  227579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:32:15.191361  227579 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.310104867s)
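
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. The effective values can be confirmed after the restart with a single grep over the keys the edits touch:

	# Confirm the settings the sed edits were meant to leave behind.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
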
	I1019 17:32:15.191395  227579 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:32:15.191453  227579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:32:15.195814  227579 start.go:564] Will wait 60s for crictl version
	I1019 17:32:15.195875  227579 ssh_runner.go:195] Run: which crictl
	I1019 17:32:15.200298  227579 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:32:15.229577  227579 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
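	Note: the two 60s waits above (first for the socket path, then for crictl) can be reproduced by hand; a minimal sketch:
	
	# Wait up to 60s for the CRI-O socket, then query the runtime over CRI.
	for i in $(seq 1 60); do
	  test -S /var/run/crio/crio.sock && break
	  sleep 1
	done
	sudo crictl version   # reads /etc/crictl.yaml for the endpoint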
	I1019 17:32:15.229658  227579 ssh_runner.go:195] Run: crio --version
	I1019 17:32:15.262606  227579 ssh_runner.go:195] Run: crio --version
	I1019 17:32:15.300264  227579 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1019 17:32:15.303092  227579 cli_runner.go:164] Run: docker network inspect old-k8s-version-125363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:32:15.320188  227579 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:32:15.324689  227579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:32:15.335370  227579 kubeadm.go:884] updating cluster {Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:32:15.335484  227579 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:32:15.335544  227579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:32:15.385033  227579 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:32:15.385055  227579 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:32:15.385110  227579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:32:15.419953  227579 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:32:15.419974  227579 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:32:15.419982  227579 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1019 17:32:15.420138  227579 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-125363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
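	Note: the empty ExecStart= line above deliberately clears the stock ExecStart before the drop-in sets its own. Once the drop-in is installed (the scp of 10-kubeadm.conf a few lines below), the merged result can be checked with:
	
	# Show the unit plus its drop-ins, and the ExecStart systemd actually resolved.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart --no-pager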
	I1019 17:32:15.420217  227579 ssh_runner.go:195] Run: crio config
	I1019 17:32:15.511601  227579 cni.go:84] Creating CNI manager for ""
	I1019 17:32:15.511625  227579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:15.511653  227579 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:32:15.511679  227579 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-125363 NodeName:old-k8s-version-125363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:32:15.511812  227579 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-125363"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:32:15.511882  227579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1019 17:32:15.521453  227579 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:32:15.521525  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:32:15.529962  227579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1019 17:32:15.544850  227579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:32:15.560107  227579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
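	Note: with the rendered config now at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline before any cluster mutation; a sketch, assuming kubeadm was transferred alongside kubelet/kubectl into the binaries directory (kubeadm config validate exists in kubeadm v1.26 and later):
	
	# Validate the multi-document kubeadm config without touching the cluster.
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new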
	I1019 17:32:15.575424  227579 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:32:15.579798  227579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
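	Note: this /etc/hosts rewrite (and the host.minikube.internal one earlier) is the same idempotent replace-or-append pattern; generalized as a sketch with a hypothetical helper:
	
	# update_host IP NAME -- drop any old tab-separated entry for NAME, append
	# the fresh one, then copy the temp file back over /etc/hosts.
	update_host() {
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	}
	update_host 192.168.85.2 control-plane.minikube.internal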
	I1019 17:32:15.590473  227579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:15.730328  227579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:15.764200  227579 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363 for IP: 192.168.85.2
	I1019 17:32:15.764270  227579 certs.go:195] generating shared ca certs ...
	I1019 17:32:15.764300  227579 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:15.764480  227579 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:32:15.764572  227579 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:32:15.764612  227579 certs.go:257] generating profile certs ...
	I1019 17:32:15.764740  227579 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.key
	I1019 17:32:15.764899  227579 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key.02194795
	I1019 17:32:15.764979  227579 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key
	I1019 17:32:15.765132  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:32:15.765197  227579 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:32:15.765222  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:32:15.765284  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:32:15.765346  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:32:15.765407  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:32:15.765493  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:32:15.766911  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:32:15.798438  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:32:15.827844  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:32:15.866115  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:32:15.899727  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1019 17:32:15.921573  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:32:16.016055  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:32:16.076444  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:32:16.124821  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:32:16.145005  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:32:16.166635  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:32:16.186590  227579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:32:16.202059  227579 ssh_runner.go:195] Run: openssl version
	I1019 17:32:16.208586  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:32:16.218599  227579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:32:16.222736  227579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:32:16.222834  227579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:32:16.268070  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:32:16.281028  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:32:16.290340  227579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:16.294411  227579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:16.294529  227579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:16.337453  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:32:16.346217  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:32:16.355225  227579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:32:16.359457  227579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:32:16.359564  227579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:32:16.401315  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
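	Note: the 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the verifier locates a CA in /etc/ssl/certs; reproducing one by hand:
	
	# The link name is the cert's subject hash plus a ".0" collision suffix.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"   # prints b5213941, matching the symlink created above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"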
	I1019 17:32:16.422459  227579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:32:16.432235  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:32:16.539694  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:32:16.649782  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:32:16.738698  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:32:16.835512  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:32:16.936163  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
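	Note: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24h), so each Run above is a pass/fail expiry probe; the same set as a loop:
	
	# Flag any control-plane cert that expires within 24 hours.
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/$c.crt" || echo "expiring soon: $c"
	done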
	I1019 17:32:17.031513  227579 kubeadm.go:401] StartCluster: {Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:17.031657  227579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:32:17.031746  227579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:32:17.132212  227579 cri.go:89] found id: "3c55bfaecaef635657a94348a5e34566add59da36166b771bc7f67010edd9cce"
	I1019 17:32:17.132282  227579 cri.go:89] found id: "d959f3fa938ffb70285c4fe006b5ec8e4f7b88315257a5e8629229ec663ed934"
	I1019 17:32:17.132301  227579 cri.go:89] found id: "1fc58fbce400e6ef28650fd5f0e0edaa142b9b5f7c281501ecbc55ed3dd3e00d"
	I1019 17:32:17.132321  227579 cri.go:89] found id: "197ecf559616738c132d97a47e273cc3f3fba72a3ba90d7e2be8660caee32f50"
	I1019 17:32:17.132340  227579 cri.go:89] found id: ""
	I1019 17:32:17.132419  227579 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:32:17.155134  227579 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:32:17Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:32:17.155260  227579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:32:17.176031  227579 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:32:17.176100  227579 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:32:17.176167  227579 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:32:17.192737  227579 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:32:17.193257  227579 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-125363" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:17.193424  227579 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-125363" cluster setting kubeconfig missing "old-k8s-version-125363" context setting]
	I1019 17:32:17.193776  227579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:17.195492  227579 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:32:17.227146  227579 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:32:17.227226  227579 kubeadm.go:602] duration metric: took 51.106476ms to restartPrimaryControlPlane
	I1019 17:32:17.227250  227579 kubeadm.go:403] duration metric: took 195.74713ms to StartCluster
	I1019 17:32:17.227290  227579 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:17.227386  227579 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:17.228112  227579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:17.228399  227579 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:32:17.228832  227579 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:32:17.228795  227579 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:32:17.228955  227579 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-125363"
	I1019 17:32:17.228990  227579 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-125363"
	W1019 17:32:17.229042  227579 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:32:17.229077  227579 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:32:17.229018  227579 addons.go:70] Setting dashboard=true in profile "old-k8s-version-125363"
	I1019 17:32:17.229295  227579 addons.go:239] Setting addon dashboard=true in "old-k8s-version-125363"
	W1019 17:32:17.229303  227579 addons.go:248] addon dashboard should already be in state true
	I1019 17:32:17.229320  227579 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:32:17.230030  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.229024  227579 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-125363"
	I1019 17:32:17.230572  227579 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-125363"
	I1019 17:32:17.230873  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.231218  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.232221  227579 out.go:179] * Verifying Kubernetes components...
	I1019 17:32:17.236703  227579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:17.283274  227579 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-125363"
	W1019 17:32:17.283296  227579 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:32:17.283323  227579 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:32:17.284131  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.290219  227579 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:32:17.293148  227579 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:32:17.297555  227579 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:17.297579  227579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:32:17.297646  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:17.297833  227579 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:32:14.878582  225032 out.go:252]   - Generating certificates and keys ...
	I1019 17:32:14.878751  225032 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:32:14.878849  225032 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:32:15.015803  225032 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:32:15.263209  225032 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:32:15.780959  225032 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:32:15.912356  225032 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:32:16.212911  225032 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:32:16.213182  225032 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-038781] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:32:16.296754  225032 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:32:16.297311  225032 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-038781] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:32:17.265690  225032 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:32:17.767259  225032 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:32:18.199030  225032 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:32:18.199565  225032 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:32:19.088169  225032 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:32:19.232788  225032 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:32:19.634904  225032 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:32:17.300682  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:32:17.300709  227579 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:32:17.300776  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:17.331351  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:17.356782  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:17.359237  227579 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:17.359257  227579 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:32:17.359316  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:17.396929  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:17.656472  227579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:17.721224  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:32:17.721287  227579 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:32:17.732223  227579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:17.771811  227579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:17.780425  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:32:17.780450  227579 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:32:17.887241  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:32:17.887317  227579 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:32:18.022948  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:32:18.022968  227579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:32:18.222384  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:32:18.222405  227579 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:32:18.275707  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:32:18.275733  227579 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:32:18.322815  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:32:18.322840  227579 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:32:18.356587  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:32:18.356616  227579 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:32:18.390219  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:32:18.390243  227579 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:32:18.424352  227579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:32:20.700286  225032 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:32:21.385008  225032 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:32:21.385109  225032 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:32:21.389103  225032 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:32:21.392804  225032 out.go:252]   - Booting up control plane ...
	I1019 17:32:21.392921  225032 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:32:21.393003  225032 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:32:21.393087  225032 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:32:21.414512  225032 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:32:21.414665  225032 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:32:21.430807  225032 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:32:21.431137  225032 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:32:21.431348  225032 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:32:21.649220  225032 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:32:21.649356  225032 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:32:23.649588  225032 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.0014623s
	I1019 17:32:23.653182  225032 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:32:23.653285  225032 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1019 17:32:23.653594  225032 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:32:23.653684  225032 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:32:26.842339  227579 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.110040041s)
	I1019 17:32:26.842745  227579 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.070874324s)
	I1019 17:32:26.842785  227579 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-125363" to be "Ready" ...
	I1019 17:32:26.843867  227579 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.187324753s)
	I1019 17:32:26.913207  227579 node_ready.go:49] node "old-k8s-version-125363" is "Ready"
	I1019 17:32:26.913236  227579 node_ready.go:38] duration metric: took 70.432316ms for node "old-k8s-version-125363" to be "Ready" ...
	I1019 17:32:26.913250  227579 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:32:26.913335  227579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:32:28.088459  227579 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.664061923s)
	I1019 17:32:28.088639  227579 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.175286819s)
	I1019 17:32:28.088660  227579 api_server.go:72] duration metric: took 10.860200373s to wait for apiserver process to appear ...
	I1019 17:32:28.088667  227579 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:32:28.088690  227579 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:32:28.091692  227579 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-125363 addons enable metrics-server
	
	I1019 17:32:28.094875  227579 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 17:32:28.488681  225032 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.83495401s
	I1019 17:32:28.098010  227579 addons.go:515] duration metric: took 10.869175813s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 17:32:28.107591  227579 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
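	Note: the healthz probe is a plain HTTPS GET that returns 200 and a body of "ok" when all checks pass; by hand (-k because the API server cert chains to minikubeCA, not the system trust store):
	
	# Poll until the API server reports healthy, then show the body.
	until curl -fsk https://192.168.85.2:8443/healthz >/dev/null; do sleep 1; done
	curl -k https://192.168.85.2:8443/healthz   # -> ok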
	I1019 17:32:28.109290  227579 api_server.go:141] control plane version: v1.28.0
	I1019 17:32:28.109321  227579 api_server.go:131] duration metric: took 20.643587ms to wait for apiserver health ...
	I1019 17:32:28.109330  227579 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:32:28.114837  227579 system_pods.go:59] 8 kube-system pods found
	I1019 17:32:28.114879  227579 system_pods.go:61] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:28.114888  227579 system_pods.go:61] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:32:28.114895  227579 system_pods.go:61] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:32:28.114902  227579 system_pods.go:61] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:32:28.114909  227579 system_pods.go:61] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:32:28.114919  227579 system_pods.go:61] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:32:28.114928  227579 system_pods.go:61] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:32:28.114938  227579 system_pods.go:61] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Running
	I1019 17:32:28.114948  227579 system_pods.go:74] duration metric: took 5.608477ms to wait for pod list to return data ...
	I1019 17:32:28.114962  227579 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:32:28.118920  227579 default_sa.go:45] found service account: "default"
	I1019 17:32:28.118949  227579 default_sa.go:55] duration metric: took 3.980159ms for default service account to be created ...
	I1019 17:32:28.118968  227579 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:32:28.127294  227579 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:28.127330  227579 system_pods.go:89] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:28.127342  227579 system_pods.go:89] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:32:28.127347  227579 system_pods.go:89] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:32:28.127362  227579 system_pods.go:89] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:32:28.127374  227579 system_pods.go:89] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:32:28.127384  227579 system_pods.go:89] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:32:28.127391  227579 system_pods.go:89] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:32:28.127396  227579 system_pods.go:89] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Running
	I1019 17:32:28.127409  227579 system_pods.go:126] duration metric: took 8.4346ms to wait for k8s-apps to be running ...
	I1019 17:32:28.127418  227579 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:32:28.127487  227579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:32:28.160542  227579 system_svc.go:56] duration metric: took 33.104025ms WaitForService to wait for kubelet
	I1019 17:32:28.160577  227579 kubeadm.go:587] duration metric: took 10.932111136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:32:28.160597  227579 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:32:28.166958  227579 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:32:28.166994  227579 node_conditions.go:123] node cpu capacity is 2
	I1019 17:32:28.167016  227579 node_conditions.go:105] duration metric: took 6.413619ms to run NodePressure ...
	I1019 17:32:28.167030  227579 start.go:242] waiting for startup goroutines ...
	I1019 17:32:28.167037  227579 start.go:247] waiting for cluster config update ...
	I1019 17:32:28.167052  227579 start.go:256] writing updated cluster config ...
	I1019 17:32:28.167431  227579 ssh_runner.go:195] Run: rm -f paused
	I1019 17:32:28.177206  227579 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:32:28.182304  227579 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-28psj" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:32:30.190015  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:30.033903  225032 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.380634634s
	I1019 17:32:31.655158  225032 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.001875799s
	I1019 17:32:31.674820  225032 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:32:31.699034  225032 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:32:31.717615  225032 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:32:31.717838  225032 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-038781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:32:31.731499  225032 kubeadm.go:319] [bootstrap-token] Using token: 69inx9.8tqqthy2gltoq5cz
	I1019 17:32:31.734660  225032 out.go:252]   - Configuring RBAC rules ...
	I1019 17:32:31.734790  225032 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:32:31.739330  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:32:31.748875  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:32:31.754296  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:32:31.760722  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:32:31.765404  225032 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:32:32.064131  225032 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:32:32.510182  225032 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:32:33.063962  225032 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:32:33.065317  225032 kubeadm.go:319] 
	I1019 17:32:33.065399  225032 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:32:33.065410  225032 kubeadm.go:319] 
	I1019 17:32:33.065492  225032 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:32:33.065500  225032 kubeadm.go:319] 
	I1019 17:32:33.065526  225032 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:32:33.065593  225032 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:32:33.065652  225032 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:32:33.065661  225032 kubeadm.go:319] 
	I1019 17:32:33.065725  225032 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:32:33.065735  225032 kubeadm.go:319] 
	I1019 17:32:33.065790  225032 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:32:33.065798  225032 kubeadm.go:319] 
	I1019 17:32:33.065852  225032 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:32:33.065943  225032 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:32:33.066017  225032 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:32:33.066026  225032 kubeadm.go:319] 
	I1019 17:32:33.066119  225032 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:32:33.066204  225032 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:32:33.066213  225032 kubeadm.go:319] 
	I1019 17:32:33.066300  225032 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 69inx9.8tqqthy2gltoq5cz \
	I1019 17:32:33.066414  225032 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 17:32:33.066439  225032 kubeadm.go:319] 	--control-plane 
	I1019 17:32:33.066447  225032 kubeadm.go:319] 
	I1019 17:32:33.066563  225032 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:32:33.066579  225032 kubeadm.go:319] 
	I1019 17:32:33.066675  225032 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 69inx9.8tqqthy2gltoq5cz \
	I1019 17:32:33.066786  225032 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
	I1019 17:32:33.071106  225032 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 17:32:33.071352  225032 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 17:32:33.071467  225032 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
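	Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key; it can be recomputed on the control plane (using minikube's certificatesDir, /var/lib/minikube/certs, per the kubeadm config above) with the standard kubeadm recipe:
	
	# Recompute the discovery token CA cert hash from the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'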
	I1019 17:32:33.071488  225032 cni.go:84] Creating CNI manager for ""
	I1019 17:32:33.071496  225032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:33.074624  225032 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:32:33.077685  225032 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:32:33.082593  225032 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:32:33.082624  225032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:32:33.099083  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:32:33.433160  225032 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:32:33.433284  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:33.433348  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-038781 minikube.k8s.io/updated_at=2025_10_19T17_32_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=no-preload-038781 minikube.k8s.io/primary=true
	I1019 17:32:33.580866  225032 ops.go:34] apiserver oom_adj: -16
	I1019 17:32:33.581044  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:34.081250  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:34.581410  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1019 17:32:32.688570  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:35.188993  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:35.081691  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:35.581084  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:36.081306  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:36.581546  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:37.081164  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:37.581412  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:38.081451  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:38.224307  225032 kubeadm.go:1114] duration metric: took 4.791066656s to wait for elevateKubeSystemPrivileges
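	Note: the half-second cadence of `kubectl get sa default` runs above is a poll for the default ServiceAccount, which the controller manager creates asynchronously after init; as an explicit loop:
	
	# Wait for the default ServiceAccount to exist in the new cluster.
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done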
	I1019 17:32:38.224333  225032 kubeadm.go:403] duration metric: took 23.648743694s to StartCluster
	I1019 17:32:38.224350  225032 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:38.224417  225032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:38.225374  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:38.225594  225032 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:32:38.225748  225032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:32:38.226006  225032 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:32:38.225975  225032 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:32:38.226063  225032 addons.go:70] Setting storage-provisioner=true in profile "no-preload-038781"
	I1019 17:32:38.226071  225032 addons.go:70] Setting default-storageclass=true in profile "no-preload-038781"
	I1019 17:32:38.226082  225032 addons.go:239] Setting addon storage-provisioner=true in "no-preload-038781"
	I1019 17:32:38.226086  225032 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-038781"
	I1019 17:32:38.226106  225032 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:32:38.226405  225032 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:32:38.226667  225032 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:32:38.229683  225032 out.go:179] * Verifying Kubernetes components...
	I1019 17:32:38.232671  225032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:38.269323  225032 addons.go:239] Setting addon default-storageclass=true in "no-preload-038781"
	I1019 17:32:38.269363  225032 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:32:38.269769  225032 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:32:38.271613  225032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:32:38.275181  225032 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:38.275213  225032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:32:38.275281  225032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:32:38.310603  225032 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:38.310641  225032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:32:38.310704  225032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:32:38.344728  225032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:32:38.363633  225032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:32:38.600115  225032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:32:38.600215  225032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:38.661855  225032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:38.733609  225032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:39.681837  225032 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.081588956s)
	I1019 17:32:39.682865  225032 node_ready.go:35] waiting up to 6m0s for node "no-preload-038781" to be "Ready" ...
	I1019 17:32:39.683198  225032 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.083052749s)
	I1019 17:32:39.683910  225032 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1019 17:32:39.683346  225032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.021464403s)
	I1019 17:32:40.191104  225032 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-038781" context rescaled to 1 replicas
	I1019 17:32:40.224208  225032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490562172s)
	I1019 17:32:40.235074  225032 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1019 17:32:37.697012  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:40.191595  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:40.238286  225032 addons.go:515] duration metric: took 2.012295734s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1019 17:32:41.691278  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:44.186302  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:42.693998  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:45.191447  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:46.687641  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:49.185636  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:47.690052  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:49.693160  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:51.185699  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:53.188322  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	I1019 17:32:54.186126  225032 node_ready.go:49] node "no-preload-038781" is "Ready"
	I1019 17:32:54.186167  225032 node_ready.go:38] duration metric: took 14.50324163s for node "no-preload-038781" to be "Ready" ...
	I1019 17:32:54.186181  225032 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:32:54.186261  225032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:32:54.212132  225032 api_server.go:72] duration metric: took 15.986507353s to wait for apiserver process to appear ...
	I1019 17:32:54.212203  225032 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:32:54.212236  225032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:32:54.221237  225032 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:32:54.222515  225032 api_server.go:141] control plane version: v1.34.1
	I1019 17:32:54.222583  225032 api_server.go:131] duration metric: took 10.36033ms to wait for apiserver health ...
	I1019 17:32:54.222592  225032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:32:54.227900  225032 system_pods.go:59] 8 kube-system pods found
	I1019 17:32:54.227941  225032 system_pods.go:61] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.227949  225032 system_pods.go:61] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.227956  225032 system_pods.go:61] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.227961  225032 system_pods.go:61] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.227969  225032 system_pods.go:61] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.227973  225032 system_pods.go:61] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.227978  225032 system_pods.go:61] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.227985  225032 system_pods.go:61] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:32:54.227997  225032 system_pods.go:74] duration metric: took 5.398708ms to wait for pod list to return data ...
	I1019 17:32:54.228009  225032 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:32:54.231472  225032 default_sa.go:45] found service account: "default"
	I1019 17:32:54.231500  225032 default_sa.go:55] duration metric: took 3.483207ms for default service account to be created ...
	I1019 17:32:54.231511  225032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:32:54.234356  225032 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:54.234392  225032 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.234401  225032 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.234408  225032 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.234412  225032 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.234417  225032 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.234420  225032 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.234425  225032 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.234433  225032 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:32:54.234453  225032 retry.go:31] will retry after 216.070278ms: missing components: kube-dns
	I1019 17:32:54.454950  225032 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:54.454987  225032 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.454994  225032 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.455000  225032 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.455005  225032 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.455010  225032 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.455014  225032 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.455018  225032 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.455026  225032 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:32:54.455052  225032 retry.go:31] will retry after 272.670908ms: missing components: kube-dns
	I1019 17:32:54.732924  225032 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:54.732971  225032 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.733000  225032 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.733010  225032 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.733015  225032 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.733021  225032 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.733033  225032 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.733037  225032 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.733041  225032 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Running
	I1019 17:32:54.733050  225032 system_pods.go:126] duration metric: took 501.532253ms to wait for k8s-apps to be running ...
	I1019 17:32:54.733065  225032 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:32:54.733127  225032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:32:54.753410  225032 system_svc.go:56] duration metric: took 20.334398ms WaitForService to wait for kubelet
	I1019 17:32:54.753436  225032 kubeadm.go:587] duration metric: took 16.527818097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:32:54.753456  225032 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:32:54.758610  225032 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:32:54.758644  225032 node_conditions.go:123] node cpu capacity is 2
	I1019 17:32:54.758656  225032 node_conditions.go:105] duration metric: took 5.194389ms to run NodePressure ...
	I1019 17:32:54.758668  225032 start.go:242] waiting for startup goroutines ...
	I1019 17:32:54.758676  225032 start.go:247] waiting for cluster config update ...
	I1019 17:32:54.758687  225032 start.go:256] writing updated cluster config ...
	I1019 17:32:54.758983  225032 ssh_runner.go:195] Run: rm -f paused
	I1019 17:32:54.765558  225032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:32:54.769520  225032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:32:52.188515  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:54.189820  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:55.775637  225032 pod_ready.go:94] pod "coredns-66bc5c9577-6k8tn" is "Ready"
	I1019 17:32:55.775669  225032 pod_ready.go:86] duration metric: took 1.006121735s for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.779353  225032 pod_ready.go:83] waiting for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.783990  225032 pod_ready.go:94] pod "etcd-no-preload-038781" is "Ready"
	I1019 17:32:55.784011  225032 pod_ready.go:86] duration metric: took 4.632607ms for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.786609  225032 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.791192  225032 pod_ready.go:94] pod "kube-apiserver-no-preload-038781" is "Ready"
	I1019 17:32:55.791219  225032 pod_ready.go:86] duration metric: took 4.582892ms for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.793764  225032 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.973237  225032 pod_ready.go:94] pod "kube-controller-manager-no-preload-038781" is "Ready"
	I1019 17:32:55.973266  225032 pod_ready.go:86] duration metric: took 179.468167ms for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:56.173415  225032 pod_ready.go:83] waiting for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:56.573268  225032 pod_ready.go:94] pod "kube-proxy-2n5k9" is "Ready"
	I1019 17:32:56.573298  225032 pod_ready.go:86] duration metric: took 399.85069ms for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:56.773670  225032 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:57.173012  225032 pod_ready.go:94] pod "kube-scheduler-no-preload-038781" is "Ready"
	I1019 17:32:57.173080  225032 pod_ready.go:86] duration metric: took 399.379337ms for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:57.173101  225032 pod_ready.go:40] duration metric: took 2.407509578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:32:57.231384  225032 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:32:57.234760  225032 out.go:179] * Done! kubectl is now configured to use "no-preload-038781" cluster and "default" namespace by default
	W1019 17:32:56.688450  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:59.187911  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:33:01.188228  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:33:03.688022  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:33:04.688587  227579 pod_ready.go:94] pod "coredns-5dd5756b68-28psj" is "Ready"
	I1019 17:33:04.688617  227579 pod_ready.go:86] duration metric: took 36.506285459s for pod "coredns-5dd5756b68-28psj" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.691745  227579 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.696870  227579 pod_ready.go:94] pod "etcd-old-k8s-version-125363" is "Ready"
	I1019 17:33:04.696940  227579 pod_ready.go:86] duration metric: took 5.167573ms for pod "etcd-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.699998  227579 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.704998  227579 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-125363" is "Ready"
	I1019 17:33:04.705026  227579 pod_ready.go:86] duration metric: took 4.999456ms for pod "kube-apiserver-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.708435  227579 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.886276  227579 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-125363" is "Ready"
	I1019 17:33:04.886303  227579 pod_ready.go:86] duration metric: took 177.843349ms for pod "kube-controller-manager-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:05.087697  227579 pod_ready.go:83] waiting for pod "kube-proxy-zjv4r" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:05.486874  227579 pod_ready.go:94] pod "kube-proxy-zjv4r" is "Ready"
	I1019 17:33:05.486902  227579 pod_ready.go:86] duration metric: took 399.171766ms for pod "kube-proxy-zjv4r" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:05.687248  227579 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:06.086988  227579 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-125363" is "Ready"
	I1019 17:33:06.087016  227579 pod_ready.go:86] duration metric: took 399.741727ms for pod "kube-scheduler-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:06.087031  227579 pod_ready.go:40] duration metric: took 37.909788745s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:33:06.141064  227579 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1019 17:33:06.144605  227579 out.go:203] 
	W1019 17:33:06.147952  227579 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 17:33:06.151417  227579 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 17:33:06.154368  227579 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-125363" cluster and "default" namespace by default
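
Editor's note: both startups above gate on the apiserver's /healthz endpoint before declaring the cluster usable (the api_server.go:253/279 lines in the no-preload log, which show https://192.168.76.2:8443/healthz returning 200). A minimal Go sketch of that polling pattern, using the endpoint from the log; the InsecureSkipVerify shortcut is an assumption for the sketch and stands in for the real client's cluster-CA handling:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz hits the apiserver healthz endpoint until it returns 200
    // or the deadline passes, mirroring the api_server.go wait in the log.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: the real check trusts the cluster CA instead.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered 200: "ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        // Endpoint taken from the log above.
        if err := pollHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }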
	
	
	==> CRI-O <==
	Oct 19 17:32:54 no-preload-038781 crio[836]: time="2025-10-19T17:32:54.311795152Z" level=info msg="Created container 39bd287555337b74215fce7da98aefdc8d6bb3ef626b88d769d9ad778ad49d72: kube-system/coredns-66bc5c9577-6k8tn/coredns" id=49752974-de40-4259-b3ab-44d03ff1087d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:32:54 no-preload-038781 crio[836]: time="2025-10-19T17:32:54.319675336Z" level=info msg="Starting container: 39bd287555337b74215fce7da98aefdc8d6bb3ef626b88d769d9ad778ad49d72" id=7316c7be-1070-4d98-8911-6a18c81cd0f2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:32:54 no-preload-038781 crio[836]: time="2025-10-19T17:32:54.323016509Z" level=info msg="Started container" PID=2479 containerID=39bd287555337b74215fce7da98aefdc8d6bb3ef626b88d769d9ad778ad49d72 description=kube-system/coredns-66bc5c9577-6k8tn/coredns id=7316c7be-1070-4d98-8911-6a18c81cd0f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6dae83f9b433ed21ad2edb1005b261afcd9c0db4b49d0d8586529f72eaf9e33
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.765045089Z" level=info msg="Running pod sandbox: default/busybox/POD" id=eda875ae-0557-4d7f-b039-4e07d40c43c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.76511613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.77059639Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3aab93fb6e213c1af96fa1fbcae634f6ab454b1ab81e7d417a1ac6f867813ddb UID:e72c8cf5-0aa2-449f-9383-3dc04b70f634 NetNS:/var/run/netns/27867c70-3499-486e-9106-eb546cd9a5e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db68}] Aliases:map[]}"
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.770772541Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.783044704Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3aab93fb6e213c1af96fa1fbcae634f6ab454b1ab81e7d417a1ac6f867813ddb UID:e72c8cf5-0aa2-449f-9383-3dc04b70f634 NetNS:/var/run/netns/27867c70-3499-486e-9106-eb546cd9a5e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db68}] Aliases:map[]}"
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.783421867Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.786332445Z" level=info msg="Ran pod sandbox 3aab93fb6e213c1af96fa1fbcae634f6ab454b1ab81e7d417a1ac6f867813ddb with infra container: default/busybox/POD" id=eda875ae-0557-4d7f-b039-4e07d40c43c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.789305013Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ddebc1bb-a2e4-43f6-8eda-ee4412b8d806 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.789743125Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ddebc1bb-a2e4-43f6-8eda-ee4412b8d806 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.789912686Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ddebc1bb-a2e4-43f6-8eda-ee4412b8d806 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.792596126Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f9f9a3e9-2702-49d5-a1b7-f4376dda7834 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:32:57 no-preload-038781 crio[836]: time="2025-10-19T17:32:57.794943626Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.829986714Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f9f9a3e9-2702-49d5-a1b7-f4376dda7834 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.830649994Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ce4931d8-e0d7-44b2-b7f7-7fe5848b1d12 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.84077251Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c490ed4-5abe-4ad0-a72b-23b359d3693f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.846496802Z" level=info msg="Creating container: default/busybox/busybox" id=20c4d9b0-bc94-4106-b01c-c92c145303cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.84731444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.851999971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.852543561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.867712739Z" level=info msg="Created container 9bc4ff3074f56830d7fb6ad1a50ed2b64173c84410359f72191e3b310008119d: default/busybox/busybox" id=20c4d9b0-bc94-4106-b01c-c92c145303cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.872967928Z" level=info msg="Starting container: 9bc4ff3074f56830d7fb6ad1a50ed2b64173c84410359f72191e3b310008119d" id=a7cf5a80-e245-448e-a46e-6667caf64730 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:32:59 no-preload-038781 crio[836]: time="2025-10-19T17:32:59.879025838Z" level=info msg="Started container" PID=2532 containerID=9bc4ff3074f56830d7fb6ad1a50ed2b64173c84410359f72191e3b310008119d description=default/busybox/busybox id=a7cf5a80-e245-448e-a46e-6667caf64730 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3aab93fb6e213c1af96fa1fbcae634f6ab454b1ab81e7d417a1ac6f867813ddb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9bc4ff3074f56       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   3aab93fb6e213       busybox                                     default
	39bd287555337       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   c6dae83f9b433       coredns-66bc5c9577-6k8tn                    kube-system
	50b05889171a8       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   73cfc4e7ef2d2       storage-provisioner                         kube-system
	7b604f080f82f       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   1d2f77f0c2fd6       kindnet-t6qjz                               kube-system
	cb95a588fbb46       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   5cd379c1819c4       kube-proxy-2n5k9                            kube-system
	feecc8509d281       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      43 seconds ago      Running             kube-apiserver            0                   36d83b6be2b3f       kube-apiserver-no-preload-038781            kube-system
	69c73f138d558       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      43 seconds ago      Running             kube-scheduler            0                   558af9603228c       kube-scheduler-no-preload-038781            kube-system
	3305d795dd830       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      43 seconds ago      Running             etcd                      0                   63bd7eeff6af9       etcd-no-preload-038781                      kube-system
	a2fbcd834b272       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      43 seconds ago      Running             kube-controller-manager   0                   d2756cd146f55       kube-controller-manager-no-preload-038781   kube-system
	
	
	==> coredns [39bd287555337b74215fce7da98aefdc8d6bb3ef626b88d769d9ad778ad49d72] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34928 - 28635 "HINFO IN 7012736573511495922.1076919584518081120. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01465868s
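
Editor's note: the only query this CoreDNS instance has served so far is its own HINFO self-check; the host record injected earlier in the start log (start.go:977, {"host.minikube.internal": 192.168.76.1}) is answered by this same instance. A hedged sketch of verifying that record from inside a pod, assuming a Go toolchain is available there:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The hosts entry injected into the CoreDNS ConfigMap maps
        // host.minikube.internal to the gateway IP (192.168.76.1 in this run).
        addrs, err := net.LookupHost("host.minikube.internal")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("host.minikube.internal resolves to:", addrs)
    }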
	
	
	==> describe nodes <==
	Name:               no-preload-038781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-038781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=no-preload-038781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:32:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-038781
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:33:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:33:03 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:33:03 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:33:03 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:33:03 +0000   Sun, 19 Oct 2025 17:32:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-038781
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f7908916-dc6b-4011-8ad7-c40cd54a41fa
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6k8tn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-038781                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-t6qjz                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-038781             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-038781    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-2n5k9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-038781             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-038781 event: Registered Node no-preload-038781 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-038781 status is now: NodeReady
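
Editor's note: the node_ready.go retries in the start log poll for exactly the Ready condition shown in this Conditions/Events output (kubelet flipped it at 17:32:53). A minimal client-go sketch of that check, assuming a reachable kubeconfig; the path and names are illustrative, not minikube's implementation:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True,
    // the same condition the describe output above shows.
    func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Path is illustrative; the test run uses its own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(clientset, "no-preload-038781")
        fmt.Println("ready:", ready, "err:", err)
    }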
	
	
	==> dmesg <==
	[Oct19 17:09] overlayfs: idmapped layers are currently not supported
	[ +28.820689] overlayfs: idmapped layers are currently not supported
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3305d795dd8307dd88b920ed8556f58186c6291ca2f3ca3b03ce4cc7ba7eb980] <==
	{"level":"warn","ts":"2025-10-19T17:32:28.603034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.622626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.636614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.679328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.690456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.708518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.724834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.746351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.757954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.781109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.795939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.816180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.840171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.856495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.869704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.893017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.906374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.929017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.940873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.956310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:28.991292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:29.011256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:29.036436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:29.094462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:32:29.158230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43592","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:33:08 up  1:15,  0 user,  load average: 4.26, 4.02, 3.44
	Linux no-preload-038781 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b604f080f82fed55e9d7266a8f832285bc2bb381fe2904f1b806fd0eace2f69] <==
	I1019 17:32:43.499326       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:32:43.499811       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:32:43.499977       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:32:43.500017       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:32:43.500057       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:32:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:32:43.701341       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:32:43.701410       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:32:43.701443       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:32:43.702455       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 17:32:43.994663       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:32:43.994763       1 metrics.go:72] Registering metrics
	I1019 17:32:43.994862       1 controller.go:711] "Syncing nftables rules"
	I1019 17:32:53.701789       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:32:53.701829       1 main.go:301] handling current node
	I1019 17:33:03.701468       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:33:03.701525       1 main.go:301] handling current node
	
	
	==> kube-apiserver [feecc8509d28145ff5b8a16cbbd1add4c9169308cfc63f9aac989e590026b9bc] <==
	I1019 17:32:30.081525       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:32:30.081566       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:32:30.081598       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:32:30.093936       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:32:30.094333       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 17:32:30.118421       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:32:30.118522       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:32:30.131009       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:32:30.778215       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:32:30.784723       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:32:30.784748       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:32:31.505854       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:32:31.565683       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:32:31.686884       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:32:31.705046       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1019 17:32:31.706082       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:32:31.715722       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:32:31.922931       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:32:32.475591       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:32:32.508235       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:32:32.537409       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:32:37.429071       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:32:37.580996       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:32:37.587725       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:32:37.927516       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a2fbcd834b272c53da813ad0781d705749363240579c83df98fd2fb7061c138c] <==
	I1019 17:32:36.956266       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 17:32:36.962636       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:32:36.970642       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:32:36.970708       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:32:36.970756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:32:36.970823       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:32:36.970869       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:32:36.970899       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:32:36.971247       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:32:36.971321       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-038781"
	I1019 17:32:36.971358       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:32:36.971588       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:32:36.972509       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:32:36.972583       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:32:36.972626       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:32:36.972824       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:32:36.972910       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:32:36.973059       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:32:36.973965       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:32:36.974021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:32:36.974045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:32:36.974140       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:32:36.975326       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:32:36.980724       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:32:56.974644       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cb95a588fbb46ebae8eeaa17f671d713104d44b4ab24f071a9c1092ba092b0ee] <==
	I1019 17:32:38.643116       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:32:38.732450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:32:38.833595       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:32:38.833627       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:32:38.833714       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:32:38.873671       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:32:38.873724       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:32:38.878155       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:32:38.878475       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:32:38.878501       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:32:38.879879       1 config.go:200] "Starting service config controller"
	I1019 17:32:38.879901       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:32:38.879917       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:32:38.879922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:32:38.879933       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:32:38.879937       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:32:38.880538       1 config.go:309] "Starting node config controller"
	I1019 17:32:38.880557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:32:38.880564       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:32:38.980777       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:32:38.980816       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:32:38.980870       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69c73f138d558b3c37251c861f0b814640905ac712efe82dcd1c35af726e02a4] <==
	E1019 17:32:30.066546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:32:30.066750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:32:30.066754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:32:30.066918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:32:30.066936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:32:30.067016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:32:30.067072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:32:30.067153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:32:30.067313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:32:30.067594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:32:30.067785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:32:30.067959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:32:30.068014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:32:30.875547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:32:30.923286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:32:30.960118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 17:32:30.975654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:32:31.020539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:32:31.109233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:32:31.132958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:32:31.205683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:32:31.209276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:32:31.274951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:32:31.276614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1019 17:32:32.909191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:32:33 no-preload-038781 kubelet[1981]: I1019 17:32:33.729251    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-038781" podStartSLOduration=1.7292310400000002 podStartE2EDuration="1.72923104s" podCreationTimestamp="2025-10-19 17:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:32:33.694774045 +0000 UTC m=+1.351921897" watchObservedRunningTime="2025-10-19 17:32:33.72923104 +0000 UTC m=+1.386378892"
	Oct 19 17:32:33 no-preload-038781 kubelet[1981]: I1019 17:32:33.759011    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-038781" podStartSLOduration=1.758991828 podStartE2EDuration="1.758991828s" podCreationTimestamp="2025-10-19 17:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:32:33.729558176 +0000 UTC m=+1.386706028" watchObservedRunningTime="2025-10-19 17:32:33.758991828 +0000 UTC m=+1.416139680"
	Oct 19 17:32:36 no-preload-038781 kubelet[1981]: I1019 17:32:36.933331    1981 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:32:36 no-preload-038781 kubelet[1981]: I1019 17:32:36.933942    1981 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.000939    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/571f6c31-a383-4d1f-ba97-b0ab16c1b537-kube-proxy\") pod \"kube-proxy-2n5k9\" (UID: \"571f6c31-a383-4d1f-ba97-b0ab16c1b537\") " pod="kube-system/kube-proxy-2n5k9"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.001631    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/571f6c31-a383-4d1f-ba97-b0ab16c1b537-xtables-lock\") pod \"kube-proxy-2n5k9\" (UID: \"571f6c31-a383-4d1f-ba97-b0ab16c1b537\") " pod="kube-system/kube-proxy-2n5k9"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.001768    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/571f6c31-a383-4d1f-ba97-b0ab16c1b537-lib-modules\") pod \"kube-proxy-2n5k9\" (UID: \"571f6c31-a383-4d1f-ba97-b0ab16c1b537\") " pod="kube-system/kube-proxy-2n5k9"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.001863    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/75c3af5d-0b86-49c0-8c67-355e94a238e9-cni-cfg\") pod \"kindnet-t6qjz\" (UID: \"75c3af5d-0b86-49c0-8c67-355e94a238e9\") " pod="kube-system/kindnet-t6qjz"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.001960    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bqf\" (UniqueName: \"kubernetes.io/projected/75c3af5d-0b86-49c0-8c67-355e94a238e9-kube-api-access-s6bqf\") pod \"kindnet-t6qjz\" (UID: \"75c3af5d-0b86-49c0-8c67-355e94a238e9\") " pod="kube-system/kindnet-t6qjz"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.002069    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75c3af5d-0b86-49c0-8c67-355e94a238e9-lib-modules\") pod \"kindnet-t6qjz\" (UID: \"75c3af5d-0b86-49c0-8c67-355e94a238e9\") " pod="kube-system/kindnet-t6qjz"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.002167    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrtj\" (UniqueName: \"kubernetes.io/projected/571f6c31-a383-4d1f-ba97-b0ab16c1b537-kube-api-access-vfrtj\") pod \"kube-proxy-2n5k9\" (UID: \"571f6c31-a383-4d1f-ba97-b0ab16c1b537\") " pod="kube-system/kube-proxy-2n5k9"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.002264    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75c3af5d-0b86-49c0-8c67-355e94a238e9-xtables-lock\") pod \"kindnet-t6qjz\" (UID: \"75c3af5d-0b86-49c0-8c67-355e94a238e9\") " pod="kube-system/kindnet-t6qjz"
	Oct 19 17:32:38 no-preload-038781 kubelet[1981]: I1019 17:32:38.116038    1981 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 17:32:40 no-preload-038781 kubelet[1981]: I1019 17:32:40.576103    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2n5k9" podStartSLOduration=3.576086177 podStartE2EDuration="3.576086177s" podCreationTimestamp="2025-10-19 17:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:32:39.605245588 +0000 UTC m=+7.262393432" watchObservedRunningTime="2025-10-19 17:32:40.576086177 +0000 UTC m=+8.233234021"
	Oct 19 17:32:43 no-preload-038781 kubelet[1981]: I1019 17:32:43.897449    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-t6qjz" podStartSLOduration=1.907312234 podStartE2EDuration="6.897427139s" podCreationTimestamp="2025-10-19 17:32:37 +0000 UTC" firstStartedPulling="2025-10-19 17:32:38.329170189 +0000 UTC m=+5.986318041" lastFinishedPulling="2025-10-19 17:32:43.319285094 +0000 UTC m=+10.976432946" observedRunningTime="2025-10-19 17:32:43.646824668 +0000 UTC m=+11.303972528" watchObservedRunningTime="2025-10-19 17:32:43.897427139 +0000 UTC m=+11.554574991"
	Oct 19 17:32:53 no-preload-038781 kubelet[1981]: I1019 17:32:53.865773    1981 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:32:53 no-preload-038781 kubelet[1981]: I1019 17:32:53.951798    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db59a39e-b75f-4f1b-abb0-099bf1c7526e-config-volume\") pod \"coredns-66bc5c9577-6k8tn\" (UID: \"db59a39e-b75f-4f1b-abb0-099bf1c7526e\") " pod="kube-system/coredns-66bc5c9577-6k8tn"
	Oct 19 17:32:53 no-preload-038781 kubelet[1981]: I1019 17:32:53.951849    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstbw\" (UniqueName: \"kubernetes.io/projected/db59a39e-b75f-4f1b-abb0-099bf1c7526e-kube-api-access-cstbw\") pod \"coredns-66bc5c9577-6k8tn\" (UID: \"db59a39e-b75f-4f1b-abb0-099bf1c7526e\") " pod="kube-system/coredns-66bc5c9577-6k8tn"
	Oct 19 17:32:53 no-preload-038781 kubelet[1981]: I1019 17:32:53.951880    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/356dc8ab-93c3-4567-8229-41c2153acabc-tmp\") pod \"storage-provisioner\" (UID: \"356dc8ab-93c3-4567-8229-41c2153acabc\") " pod="kube-system/storage-provisioner"
	Oct 19 17:32:53 no-preload-038781 kubelet[1981]: I1019 17:32:53.951899    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlq6b\" (UniqueName: \"kubernetes.io/projected/356dc8ab-93c3-4567-8229-41c2153acabc-kube-api-access-wlq6b\") pod \"storage-provisioner\" (UID: \"356dc8ab-93c3-4567-8229-41c2153acabc\") " pod="kube-system/storage-provisioner"
	Oct 19 17:32:54 no-preload-038781 kubelet[1981]: W1019 17:32:54.239258    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/crio-73cfc4e7ef2d2f3af55449a6a801fc29d42ef869aeb8ad972af51a4a5ba92a58 WatchSource:0}: Error finding container 73cfc4e7ef2d2f3af55449a6a801fc29d42ef869aeb8ad972af51a4a5ba92a58: Status 404 returned error can't find the container with id 73cfc4e7ef2d2f3af55449a6a801fc29d42ef869aeb8ad972af51a4a5ba92a58
	Oct 19 17:32:54 no-preload-038781 kubelet[1981]: W1019 17:32:54.263176    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/crio-c6dae83f9b433ed21ad2edb1005b261afcd9c0db4b49d0d8586529f72eaf9e33 WatchSource:0}: Error finding container c6dae83f9b433ed21ad2edb1005b261afcd9c0db4b49d0d8586529f72eaf9e33: Status 404 returned error can't find the container with id c6dae83f9b433ed21ad2edb1005b261afcd9c0db4b49d0d8586529f72eaf9e33
	Oct 19 17:32:54 no-preload-038781 kubelet[1981]: I1019 17:32:54.678120    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.678098623 podStartE2EDuration="14.678098623s" podCreationTimestamp="2025-10-19 17:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:32:54.663830251 +0000 UTC m=+22.320978112" watchObservedRunningTime="2025-10-19 17:32:54.678098623 +0000 UTC m=+22.335246475"
	Oct 19 17:32:55 no-preload-038781 kubelet[1981]: I1019 17:32:55.668772    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6k8tn" podStartSLOduration=17.668750573 podStartE2EDuration="17.668750573s" podCreationTimestamp="2025-10-19 17:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:32:54.678859415 +0000 UTC m=+22.336007283" watchObservedRunningTime="2025-10-19 17:32:55.668750573 +0000 UTC m=+23.325898417"
	Oct 19 17:32:57 no-preload-038781 kubelet[1981]: I1019 17:32:57.476638    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7zt\" (UniqueName: \"kubernetes.io/projected/e72c8cf5-0aa2-449f-9383-3dc04b70f634-kube-api-access-md7zt\") pod \"busybox\" (UID: \"e72c8cf5-0aa2-449f-9383-3dc04b70f634\") " pod="default/busybox"
	
	
	==> storage-provisioner [50b05889171a88f5b82431f08071c040c3e30b24943242737522da8af39223a7] <==
	I1019 17:32:54.334626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:32:54.351976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:32:54.352179       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:32:54.355347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:32:54.364183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:32:54.364485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:32:54.364553       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3e86efa-396c-4e58-879b-5827a6d5b481", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-038781_1d1ed9af-5fee-49e7-bc7e-861ba180b6a3 became leader
	I1019 17:32:54.364909       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-038781_1d1ed9af-5fee-49e7-bc7e-861ba180b6a3!
	W1019 17:32:54.367426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:32:54.385410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:32:54.465442       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-038781_1d1ed9af-5fee-49e7-bc7e-861ba180b6a3!
	W1019 17:32:56.387891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:32:56.393223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:32:58.395929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:32:58.400731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:00.404341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:00.413500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:02.416465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:02.421349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:04.425434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:04.430379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:06.433755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:06.440512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:08.443705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:33:08.451083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-038781 -n no-preload-038781
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-038781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-125363 --alsologtostderr -v=1
E1019 17:33:18.055332    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-125363 --alsologtostderr -v=1: exit status 80 (1.69201921s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-125363 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:33:18.084861  231557 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:33:18.085075  231557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:18.085108  231557 out.go:374] Setting ErrFile to fd 2...
	I1019 17:33:18.085129  231557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:18.085416  231557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:33:18.085708  231557 out.go:368] Setting JSON to false
	I1019 17:33:18.085762  231557 mustload.go:66] Loading cluster: old-k8s-version-125363
	I1019 17:33:18.086192  231557 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:33:18.086759  231557 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:33:18.104897  231557 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:33:18.105193  231557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:33:18.161732  231557 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-19 17:33:18.152267285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:33:18.162406  231557 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-125363 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:33:18.165729  231557 out.go:179] * Pausing node old-k8s-version-125363 ... 
	I1019 17:33:18.168736  231557 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:33:18.169093  231557 ssh_runner.go:195] Run: systemctl --version
	I1019 17:33:18.169141  231557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:33:18.186642  231557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:33:18.289356  231557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:33:18.302109  231557 pause.go:52] kubelet running: true
	I1019 17:33:18.302174  231557 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:33:18.506596  231557 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:33:18.506690  231557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:33:18.577605  231557 cri.go:89] found id: "3f1c54529ea02b321c4155885fdf7f0ab373762c36dbd8b6947f0ec9445bdc3f"
	I1019 17:33:18.577633  231557 cri.go:89] found id: "ece679b27632a8e593d7fdf65a30b812a5e5883e49838353a369056eb0d077c4"
	I1019 17:33:18.577638  231557 cri.go:89] found id: "26fe11e3b4c99f777dd6ff13e00c2520375d45a54af8f47482b753935bdca6c4"
	I1019 17:33:18.577642  231557 cri.go:89] found id: "9ef8929ec3547c8d7ccefe3c6cab404d96aa55f957ba041fbdbb09381cb26b3f"
	I1019 17:33:18.577645  231557 cri.go:89] found id: "bd18b316c2a475ead84f1e6fa45e355d643a387c9a6060c8b54a84a10f5a3408"
	I1019 17:33:18.577653  231557 cri.go:89] found id: "3c55bfaecaef635657a94348a5e34566add59da36166b771bc7f67010edd9cce"
	I1019 17:33:18.577692  231557 cri.go:89] found id: "d959f3fa938ffb70285c4fe006b5ec8e4f7b88315257a5e8629229ec663ed934"
	I1019 17:33:18.577704  231557 cri.go:89] found id: "1fc58fbce400e6ef28650fd5f0e0edaa142b9b5f7c281501ecbc55ed3dd3e00d"
	I1019 17:33:18.577708  231557 cri.go:89] found id: "197ecf559616738c132d97a47e273cc3f3fba72a3ba90d7e2be8660caee32f50"
	I1019 17:33:18.577723  231557 cri.go:89] found id: "9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa"
	I1019 17:33:18.577731  231557 cri.go:89] found id: "01d7ad311ee27ef3a024b0e4479aea674714fcb757bf1a7c0706e86d8e1819bc"
	I1019 17:33:18.577734  231557 cri.go:89] found id: ""
	I1019 17:33:18.577807  231557 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:33:18.589073  231557 retry.go:31] will retry after 166.265013ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:33:18Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:33:18.756535  231557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:33:18.769818  231557 pause.go:52] kubelet running: false
	I1019 17:33:18.769908  231557 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:33:18.942154  231557 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:33:18.942254  231557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:33:19.017338  231557 cri.go:89] found id: "3f1c54529ea02b321c4155885fdf7f0ab373762c36dbd8b6947f0ec9445bdc3f"
	I1019 17:33:19.017364  231557 cri.go:89] found id: "ece679b27632a8e593d7fdf65a30b812a5e5883e49838353a369056eb0d077c4"
	I1019 17:33:19.017369  231557 cri.go:89] found id: "26fe11e3b4c99f777dd6ff13e00c2520375d45a54af8f47482b753935bdca6c4"
	I1019 17:33:19.017373  231557 cri.go:89] found id: "9ef8929ec3547c8d7ccefe3c6cab404d96aa55f957ba041fbdbb09381cb26b3f"
	I1019 17:33:19.017376  231557 cri.go:89] found id: "bd18b316c2a475ead84f1e6fa45e355d643a387c9a6060c8b54a84a10f5a3408"
	I1019 17:33:19.017380  231557 cri.go:89] found id: "3c55bfaecaef635657a94348a5e34566add59da36166b771bc7f67010edd9cce"
	I1019 17:33:19.017384  231557 cri.go:89] found id: "d959f3fa938ffb70285c4fe006b5ec8e4f7b88315257a5e8629229ec663ed934"
	I1019 17:33:19.017387  231557 cri.go:89] found id: "1fc58fbce400e6ef28650fd5f0e0edaa142b9b5f7c281501ecbc55ed3dd3e00d"
	I1019 17:33:19.017391  231557 cri.go:89] found id: "197ecf559616738c132d97a47e273cc3f3fba72a3ba90d7e2be8660caee32f50"
	I1019 17:33:19.017397  231557 cri.go:89] found id: "9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa"
	I1019 17:33:19.017401  231557 cri.go:89] found id: "01d7ad311ee27ef3a024b0e4479aea674714fcb757bf1a7c0706e86d8e1819bc"
	I1019 17:33:19.017404  231557 cri.go:89] found id: ""
	I1019 17:33:19.017453  231557 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:33:19.028938  231557 retry.go:31] will retry after 398.079176ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:33:19Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:33:19.427505  231557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:33:19.440851  231557 pause.go:52] kubelet running: false
	I1019 17:33:19.440921  231557 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:33:19.614211  231557 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:33:19.614365  231557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:33:19.688010  231557 cri.go:89] found id: "3f1c54529ea02b321c4155885fdf7f0ab373762c36dbd8b6947f0ec9445bdc3f"
	I1019 17:33:19.688030  231557 cri.go:89] found id: "ece679b27632a8e593d7fdf65a30b812a5e5883e49838353a369056eb0d077c4"
	I1019 17:33:19.688035  231557 cri.go:89] found id: "26fe11e3b4c99f777dd6ff13e00c2520375d45a54af8f47482b753935bdca6c4"
	I1019 17:33:19.688039  231557 cri.go:89] found id: "9ef8929ec3547c8d7ccefe3c6cab404d96aa55f957ba041fbdbb09381cb26b3f"
	I1019 17:33:19.688043  231557 cri.go:89] found id: "bd18b316c2a475ead84f1e6fa45e355d643a387c9a6060c8b54a84a10f5a3408"
	I1019 17:33:19.688046  231557 cri.go:89] found id: "3c55bfaecaef635657a94348a5e34566add59da36166b771bc7f67010edd9cce"
	I1019 17:33:19.688049  231557 cri.go:89] found id: "d959f3fa938ffb70285c4fe006b5ec8e4f7b88315257a5e8629229ec663ed934"
	I1019 17:33:19.688057  231557 cri.go:89] found id: "1fc58fbce400e6ef28650fd5f0e0edaa142b9b5f7c281501ecbc55ed3dd3e00d"
	I1019 17:33:19.688060  231557 cri.go:89] found id: "197ecf559616738c132d97a47e273cc3f3fba72a3ba90d7e2be8660caee32f50"
	I1019 17:33:19.688067  231557 cri.go:89] found id: "9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa"
	I1019 17:33:19.688070  231557 cri.go:89] found id: "01d7ad311ee27ef3a024b0e4479aea674714fcb757bf1a7c0706e86d8e1819bc"
	I1019 17:33:19.688073  231557 cri.go:89] found id: ""
	I1019 17:33:19.688165  231557 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:33:19.703198  231557 out.go:203] 
	W1019 17:33:19.706248  231557 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:33:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:33:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:33:19.706270  231557 out.go:285] * 
	* 
	W1019 17:33:19.711188  231557 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:33:19.713963  231557 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-125363 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-125363
helpers_test.go:243: (dbg) docker inspect old-k8s-version-125363:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18",
	        "Created": "2025-10-19T17:30:37.268621175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227711,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:32:06.121848116Z",
	            "FinishedAt": "2025-10-19T17:32:03.943644179Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/hosts",
	        "LogPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18-json.log",
	        "Name": "/old-k8s-version-125363",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-125363:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-125363",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18",
	                "LowerDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-125363",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-125363/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-125363",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-125363",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-125363",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "75c7702cf2ecf7dbe9f89ecd1617ed8c066602b44445f0fc55fabed66d881fa4",
	            "SandboxKey": "/var/run/docker/netns/75c7702cf2ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-125363": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f1:eb:dc:b6:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c605d5ace27fd5383c607c72991f6fd31798e2bf8285be119b02bf86a3e7e1c",
	                    "EndpointID": "872cbc80b1bb7591adc70973c2ab7a7dd0ed93632f5ee6528ea215a414ea3d84",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-125363",
	                        "7cebf5ae65ac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363
E1019 17:33:20.086944    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363: exit status 2 (378.833834ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-125363 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-125363 logs -n 25: (1.486982717s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-953581 sudo docker system info                                                                                                                                                                                                      │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo containerd config dump                                                                                                                                                                                                  │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo crio config                                                                                                                                                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ delete  │ -p bridge-953581                                                                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ stop    │ -p old-k8s-version-125363 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:32 UTC │
	│ start   │ -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                                                                                               │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:32:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:32:05.705396  227579 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:32:05.705954  227579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:32:05.705989  227579 out.go:374] Setting ErrFile to fd 2...
	I1019 17:32:05.706009  227579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:32:05.706312  227579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:32:05.706789  227579 out.go:368] Setting JSON to false
	I1019 17:32:05.707765  227579 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4474,"bootTime":1760890652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:32:05.707866  227579 start.go:143] virtualization:  
	I1019 17:32:05.711313  227579 out.go:179] * [old-k8s-version-125363] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:32:05.715503  227579 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:32:05.715576  227579 notify.go:221] Checking for updates...
	I1019 17:32:05.721725  227579 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:32:05.724829  227579 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:05.727744  227579 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:32:05.730660  227579 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:32:05.734484  227579 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:32:05.738025  227579 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:32:05.741654  227579 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 17:32:05.744751  227579 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:32:05.788129  227579 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:32:05.788296  227579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:32:05.885509  227579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-19 17:32:05.876344573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:32:05.885605  227579 docker.go:319] overlay module found
	I1019 17:32:05.888753  227579 out.go:179] * Using the docker driver based on existing profile
	I1019 17:32:05.891597  227579 start.go:309] selected driver: docker
	I1019 17:32:05.891616  227579 start.go:930] validating driver "docker" against &{Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:05.891714  227579 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:32:05.892404  227579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:32:05.986565  227579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-19 17:32:05.977132066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:32:05.986920  227579 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:32:05.986967  227579 cni.go:84] Creating CNI manager for ""
	I1019 17:32:05.987017  227579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:05.987052  227579 start.go:353] cluster config:
	{Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:05.992003  227579 out.go:179] * Starting "old-k8s-version-125363" primary control-plane node in "old-k8s-version-125363" cluster
	I1019 17:32:05.995071  227579 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:32:06.007364  227579 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:32:06.010349  227579 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:32:06.010471  227579 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 17:32:06.010483  227579 cache.go:59] Caching tarball of preloaded images
	I1019 17:32:06.010663  227579 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:32:06.011141  227579 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:32:06.011168  227579 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 17:32:06.011331  227579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/config.json ...
	I1019 17:32:06.045794  227579 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:32:06.045819  227579 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:32:06.045837  227579 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:32:06.045860  227579 start.go:360] acquireMachinesLock for old-k8s-version-125363: {Name:mkd08e65b205b510576dbfd42cd5fdbceaaa1817 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:32:06.045929  227579 start.go:364] duration metric: took 48.247µs to acquireMachinesLock for "old-k8s-version-125363"
	I1019 17:32:06.045951  227579 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:32:06.045963  227579 fix.go:54] fixHost starting: 
	I1019 17:32:06.046242  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:06.075194  227579 fix.go:112] recreateIfNeeded on old-k8s-version-125363: state=Stopped err=<nil>
	W1019 17:32:06.075221  227579 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:32:05.411634  225032 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.505485853s)
	I1019 17:32:05.411655  225032 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.50566422s)
	I1019 17:32:05.411723  225032 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:32:05.411660  225032 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1019 17:32:05.411801  225032 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1019 17:32:05.411827  225032 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1019 17:32:09.114937  225032 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.703085356s)
	I1019 17:32:09.114963  225032 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1019 17:32:09.114990  225032 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.703250889s)
	I1019 17:32:09.115015  225032 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1019 17:32:09.115110  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:32:09.119735  225032 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1019 17:32:09.119779  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1019 17:32:09.200802  225032 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:32:09.200953  225032 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1019 17:32:09.822243  225032 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1019 17:32:09.822280  225032 cache_images.go:125] Successfully loaded all cached images
	I1019 17:32:09.822287  225032 cache_images.go:94] duration metric: took 13.329116697s to LoadCachedImages
	I1019 17:32:09.822297  225032 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:32:09.822399  225032 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-038781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:32:09.822489  225032 ssh_runner.go:195] Run: crio config
	I1019 17:32:06.078291  227579 out.go:252] * Restarting existing docker container for "old-k8s-version-125363" ...
	I1019 17:32:06.078370  227579 cli_runner.go:164] Run: docker start old-k8s-version-125363
	I1019 17:32:06.394573  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:06.416143  227579 kic.go:430] container "old-k8s-version-125363" state is running.
	I1019 17:32:06.417276  227579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:32:06.441393  227579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/config.json ...
	I1019 17:32:06.441700  227579 machine.go:94] provisionDockerMachine start ...
	I1019 17:32:06.441784  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:06.474828  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:06.475257  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:06.475268  227579 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:32:06.476051  227579 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 17:32:09.646113  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-125363
	
	I1019 17:32:09.646142  227579 ubuntu.go:182] provisioning hostname "old-k8s-version-125363"
	I1019 17:32:09.646212  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:09.666059  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:09.666358  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:09.666369  227579 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-125363 && echo "old-k8s-version-125363" | sudo tee /etc/hostname
	I1019 17:32:09.836621  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-125363
	
	I1019 17:32:09.836694  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:09.859859  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:09.860169  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:09.860187  227579 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-125363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-125363/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-125363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:32:10.041777  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:32:10.041799  227579 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:32:10.041817  227579 ubuntu.go:190] setting up certificates
	I1019 17:32:10.041827  227579 provision.go:84] configureAuth start
	I1019 17:32:10.041888  227579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:32:10.090664  227579 provision.go:143] copyHostCerts
	I1019 17:32:10.090729  227579 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:32:10.090747  227579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:32:10.090829  227579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:32:10.090947  227579 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:32:10.090952  227579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:32:10.090985  227579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:32:10.091044  227579 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:32:10.091049  227579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:32:10.091074  227579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:32:10.091130  227579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-125363 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-125363]
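	Aside (command assumed, not executed by the test; the path is the ServerCertPath from the auth options above): the san=[...] list should appear verbatim in the generated server certificate and can be checked with openssl:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'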
	I1019 17:32:09.891797  225032 cni.go:84] Creating CNI manager for ""
	I1019 17:32:09.891822  225032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:09.891840  225032 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:32:09.891868  225032 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-038781 NodeName:no-preload-038781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:32:09.891995  225032 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-038781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
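	Aside (invocation assumed, not part of the run): a generated config like the one above can be sanity-checked offline before kubeadm init consumes it, using the path the run copies it to below:

	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml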
	
	I1019 17:32:09.892066  225032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:32:09.901826  225032 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1019 17:32:09.901899  225032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1019 17:32:09.912567  225032 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1019 17:32:09.912654  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1019 17:32:09.913862  225032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1019 17:32:09.913944  225032 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1019 17:32:09.919861  225032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1019 17:32:09.919895  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1019 17:32:10.945507  225032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:32:10.966028  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1019 17:32:10.974124  225032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1019 17:32:10.974165  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1019 17:32:11.320989  225032 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1019 17:32:11.327912  225032 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1019 17:32:11.332071  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
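	Aside (commands assumed; URL and version are the ones logged above): the checksum=file:...sha256 download URLs amount to the usual fetch-and-verify dance, reproducible by hand:

	    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm
	    # the .sha256 file holds only the hex digest, so pair it with the filename
	    echo "$(curl -Ls https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256)  kubeadm" | sha256sum --check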
	I1019 17:32:11.799676  225032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:32:11.810632  225032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:32:11.828521  225032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:32:11.845458  225032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1019 17:32:11.862567  225032 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:32:11.867493  225032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:32:11.879803  225032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:12.009314  225032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:12.030388  225032 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781 for IP: 192.168.76.2
	I1019 17:32:12.030459  225032 certs.go:195] generating shared ca certs ...
	I1019 17:32:12.030500  225032 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:12.030750  225032 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:32:12.030837  225032 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:32:12.030875  225032 certs.go:257] generating profile certs ...
	I1019 17:32:12.030978  225032 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key
	I1019 17:32:12.031034  225032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.crt with IP's: []
	I1019 17:32:13.050657  225032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.crt ...
	I1019 17:32:13.050730  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.crt: {Name:mk3f290cc4c355f70dccace558882b1a84846e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.050950  225032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key ...
	I1019 17:32:13.050984  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key: {Name:mk19b07416c5061089c7b6549b161a2b3570a3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.051124  225032 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d
	I1019 17:32:13.051159  225032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 17:32:13.331976  225032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d ...
	I1019 17:32:13.332009  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d: {Name:mkc0def6fd5a2512785b39750f1e37f96839be83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.332179  225032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d ...
	I1019 17:32:13.332195  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d: {Name:mk029fd686d0344ce1845ee5718bc0ff0b5ae626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:13.332269  225032 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt.559c1e8d -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt
	I1019 17:32:13.332351  225032 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key
	I1019 17:32:13.332414  225032 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key
	I1019 17:32:13.332433  225032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt with IP's: []
	I1019 17:32:14.130589  225032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt ...
	I1019 17:32:14.130620  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt: {Name:mk88d84623ca49934579a6025399288bc768dc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:14.130802  225032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key ...
	I1019 17:32:14.130816  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key: {Name:mk8bbb9dbb3136c32eb9a12263c10da7dd73b55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:14.131003  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:32:14.131051  225032 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:32:14.131064  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:32:14.131090  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:32:14.131118  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:32:14.131144  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:32:14.131191  225032 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:32:14.131742  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:32:14.152114  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:32:14.173704  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:32:14.194107  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:32:14.214680  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:32:14.233722  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:32:14.256395  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:32:14.274204  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:32:14.300626  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:32:14.320472  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:32:14.339977  225032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:32:14.359398  225032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:32:14.372973  225032 ssh_runner.go:195] Run: openssl version
	I1019 17:32:14.379626  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:32:14.388363  225032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:14.392892  225032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:14.392966  225032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:14.434167  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:32:14.442995  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:32:14.451673  225032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:32:14.456124  225032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:32:14.456186  225032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:32:14.497639  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:32:14.506884  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:32:14.517169  225032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:32:14.521515  225032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:32:14.521631  225032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:32:14.562849  225032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
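	Aside (not part of the run): the 8-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, exactly what the preceding openssl x509 -hash calls print, so each ln -fs pairs a certificate with its hash:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem   # expected to print 3ec20f2e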
	I1019 17:32:14.571447  225032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:32:14.575543  225032 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:32:14.575594  225032 kubeadm.go:401] StartCluster: {Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:14.575665  225032 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:32:14.575735  225032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:32:14.602660  225032 cri.go:89] found id: ""
	I1019 17:32:14.602771  225032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:32:14.611092  225032 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:32:14.619281  225032 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:32:14.619380  225032 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:32:14.627525  225032 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:32:14.627548  225032 kubeadm.go:158] found existing configuration files:
	
	I1019 17:32:14.627601  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:32:14.636052  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:32:14.636112  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:32:14.644413  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:32:14.652592  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:32:14.652682  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:32:14.660919  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:32:14.669388  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:32:14.669483  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:32:14.677898  225032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:32:14.686302  225032 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:32:14.686469  225032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
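
Editor's note: the four grep/rm pairs above are the stale-kubeconfig sweep. Each file under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that does not contain it (here, any file at all, since none exist yet) is removed before kubeadm init runs. A minimal Go sketch of the same loop, with a hypothetical cleanStaleConfigs helper (the real logic lives in minikube's kubeadm.go, per the file:line tags in the log):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs removes every kubeconfig that does not reference the
// expected control-plane endpoint. grep exits non-zero both when the string
// is absent and when the file is missing, so either case counts as stale.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
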
	I1019 17:32:14.694386  225032 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:32:14.734240  225032 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:32:14.734498  225032 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:32:14.756560  225032 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:32:14.756642  225032 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:32:14.756680  225032 kubeadm.go:319] OS: Linux
	I1019 17:32:14.756734  225032 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:32:14.756815  225032 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:32:14.756895  225032 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:32:14.756964  225032 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:32:14.757035  225032 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:32:14.757113  225032 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:32:14.757184  225032 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:32:14.757259  225032 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:32:14.757324  225032 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:32:14.827223  225032 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:32:14.827392  225032 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:32:14.827494  225032 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:32:14.842943  225032 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:32:11.507504  227579 provision.go:177] copyRemoteCerts
	I1019 17:32:11.507575  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:32:11.507613  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:11.539031  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:11.678091  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1019 17:32:11.721579  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:32:11.794758  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:32:11.815935  227579 provision.go:87] duration metric: took 1.774086155s to configureAuth
	I1019 17:32:11.815960  227579 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:32:11.816154  227579 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:32:11.816276  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:11.834739  227579 main.go:143] libmachine: Using SSH client type: native
	I1019 17:32:11.835038  227579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1019 17:32:11.835059  227579 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:32:12.246234  227579 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:32:12.246259  227579 machine.go:97] duration metric: took 5.804541179s to provisionDockerMachine
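
Editor's note: the provisioning step that just completed pushes a CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube over SSH and restarts the service, which is how the --insecure-registry flag for the service CIDR reaches cri-o. A sketch of how that remote command string can be composed, assuming a hypothetical crioOptsCmd helper:

package main

import "fmt"

// crioOptsCmd builds the shell command logged above: write the options file,
// then restart cri-o so it picks up the insecure-registry setting.
func crioOptsCmd(insecureRegistry string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", insecureRegistry)
	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
}

func main() {
	fmt.Println(crioOptsCmd("10.96.0.0/12"))
}
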
	I1019 17:32:12.246270  227579 start.go:293] postStartSetup for "old-k8s-version-125363" (driver="docker")
	I1019 17:32:12.246281  227579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:32:12.246352  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:32:12.246393  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.269664  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.380077  227579 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:32:12.385691  227579 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:32:12.385729  227579 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:32:12.385741  227579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:32:12.385795  227579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:32:12.385880  227579 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:32:12.386010  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:32:12.394576  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:32:12.413964  227579 start.go:296] duration metric: took 167.679158ms for postStartSetup
	I1019 17:32:12.414055  227579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:32:12.414102  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.433575  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.536154  227579 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:32:12.541156  227579 fix.go:56] duration metric: took 6.495186439s for fixHost
	I1019 17:32:12.541184  227579 start.go:83] releasing machines lock for "old-k8s-version-125363", held for 6.495243162s
	I1019 17:32:12.541253  227579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-125363
	I1019 17:32:12.559881  227579 ssh_runner.go:195] Run: cat /version.json
	I1019 17:32:12.559931  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.559942  227579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:32:12.560004  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:12.585653  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.600379  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:12.698771  227579 ssh_runner.go:195] Run: systemctl --version
	I1019 17:32:12.797926  227579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:32:12.867719  227579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:32:12.872380  227579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:32:12.872450  227579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:32:12.880895  227579 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:32:12.880923  227579 start.go:496] detecting cgroup driver to use...
	I1019 17:32:12.880954  227579 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:32:12.881006  227579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:32:12.896945  227579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:32:12.911001  227579 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:32:12.911066  227579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:32:12.927416  227579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:32:12.941298  227579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:32:13.098170  227579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:32:13.243767  227579 docker.go:234] disabling docker service ...
	I1019 17:32:13.243845  227579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:32:13.259097  227579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:32:13.272077  227579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:32:13.413509  227579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:32:13.565441  227579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:32:13.582329  227579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:32:13.611177  227579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1019 17:32:13.611248  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.637907  227579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:32:13.637981  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.657565  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.668095  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.677615  227579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:32:13.688720  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.701425  227579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.712704  227579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:32:13.724503  227579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:32:13.736628  227579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:32:13.744704  227579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:13.881213  227579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:32:15.191361  227579 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.310104867s)
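
Editor's note: the sequence from 17:32:13.611 to the restart above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and enable unprivileged low ports via default_sysctls, then daemon-reload and restart crio. The sed edits are line-anchored regex replaces; an illustrative in-memory equivalent in Go (the test itself drives sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"systemd\"\n"
	// Same substitutions as the logged `sed -i 's|^.*pause_image = .*$|...|'` calls.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
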
	I1019 17:32:15.191395  227579 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:32:15.191453  227579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:32:15.195814  227579 start.go:564] Will wait 60s for crictl version
	I1019 17:32:15.195875  227579 ssh_runner.go:195] Run: which crictl
	I1019 17:32:15.200298  227579 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:32:15.229577  227579 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:32:15.229658  227579 ssh_runner.go:195] Run: crio --version
	I1019 17:32:15.262606  227579 ssh_runner.go:195] Run: crio --version
	I1019 17:32:15.300264  227579 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1019 17:32:15.303092  227579 cli_runner.go:164] Run: docker network inspect old-k8s-version-125363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:32:15.320188  227579 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:32:15.324689  227579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
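
Editor's note: the one-liner above is minikube's /etc/hosts upsert: filter out any existing host.minikube.internal entry, append the fresh gateway mapping, and sudo cp the temp file back into place. The same transformation as a small Go sketch (upsertHost is a hypothetical name):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline in the logged command.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.85.1", "host.minikube.internal"))
}
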
	I1019 17:32:15.335370  227579 kubeadm.go:884] updating cluster {Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:32:15.335484  227579 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 17:32:15.335544  227579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:32:15.385033  227579 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:32:15.385055  227579 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:32:15.385110  227579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:32:15.419953  227579 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:32:15.419974  227579 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:32:15.419982  227579 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1019 17:32:15.420138  227579 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-125363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:32:15.420217  227579 ssh_runner.go:195] Run: crio config
	I1019 17:32:15.511601  227579 cni.go:84] Creating CNI manager for ""
	I1019 17:32:15.511625  227579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:15.511653  227579 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:32:15.511679  227579 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-125363 NodeName:old-k8s-version-125363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:32:15.511812  227579 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-125363"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
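
Editor's note: the generated kubeadm.yaml above is a four-document YAML stream: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration. A dependency-free way to sanity-check which kinds such a stream carries, purely as an illustration:

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML stream on "---" separators and pulls
// out each document's top-level kind.
func kinds(doc string) []string {
	var ks []string
	for _, d := range strings.Split(doc, "\n---\n") {
		for _, line := range strings.Split(d, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				ks = append(ks, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return ks
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
}
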
	I1019 17:32:15.511882  227579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1019 17:32:15.521453  227579 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:32:15.521525  227579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:32:15.529962  227579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1019 17:32:15.544850  227579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:32:15.560107  227579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1019 17:32:15.575424  227579 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:32:15.579798  227579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:32:15.590473  227579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:15.730328  227579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:15.764200  227579 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363 for IP: 192.168.85.2
	I1019 17:32:15.764270  227579 certs.go:195] generating shared ca certs ...
	I1019 17:32:15.764300  227579 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:15.764480  227579 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:32:15.764572  227579 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:32:15.764612  227579 certs.go:257] generating profile certs ...
	I1019 17:32:15.764740  227579 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.key
	I1019 17:32:15.764899  227579 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key.02194795
	I1019 17:32:15.764979  227579 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key
	I1019 17:32:15.765132  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:32:15.765197  227579 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:32:15.765222  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:32:15.765284  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:32:15.765346  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:32:15.765407  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:32:15.765493  227579 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:32:15.766911  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:32:15.798438  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:32:15.827844  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:32:15.866115  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:32:15.899727  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1019 17:32:15.921573  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:32:16.016055  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:32:16.076444  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:32:16.124821  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:32:16.145005  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:32:16.166635  227579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:32:16.186590  227579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:32:16.202059  227579 ssh_runner.go:195] Run: openssl version
	I1019 17:32:16.208586  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:32:16.218599  227579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:32:16.222736  227579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:32:16.222834  227579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:32:16.268070  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:32:16.281028  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:32:16.290340  227579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:16.294411  227579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:16.294529  227579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:32:16.337453  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:32:16.346217  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:32:16.355225  227579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:32:16.359457  227579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:32:16.359564  227579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:32:16.401315  227579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
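
Editor's note: each cert install above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs. That is where b5213941.0, 3ec20f2e.0, and 51391683.0 come from; OpenSSL looks CAs up by that hash. A sketch that reproduces the symlink command for a given PEM, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCmd returns the `test -L || ln -fs` command for a cert, using the
// subject hash that `openssl x509 -hash -noout` prints.
func linkCmd(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return fmt.Sprintf("test -L /etc/ssl/certs/%s.0 || ln -fs %s /etc/ssl/certs/%s.0", hash, pem, hash), nil
}

func main() {
	cmd, err := linkCmd("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println(cmd)
}
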
	I1019 17:32:16.422459  227579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:32:16.432235  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:32:16.539694  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:32:16.649782  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:32:16.738698  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:32:16.835512  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:32:16.936163  227579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
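
Editor's note: the run of `openssl x509 ... -checkend 86400` calls above is an expiry probe: -checkend exits non-zero when the certificate expires within the given number of seconds (here 24 hours), which is what lets the restart path decide whether the existing control-plane certs can be reused. Wrapped in Go:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay reports whether the cert will expire within 24h.
// A non-zero openssl exit means "expiring soon" (or an unreadable cert).
func expiresWithinDay(cert string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	fmt.Println(expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}
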
	I1019 17:32:17.031513  227579 kubeadm.go:401] StartCluster: {Name:old-k8s-version-125363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-125363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:32:17.031657  227579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:32:17.031746  227579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:32:17.132212  227579 cri.go:89] found id: "3c55bfaecaef635657a94348a5e34566add59da36166b771bc7f67010edd9cce"
	I1019 17:32:17.132282  227579 cri.go:89] found id: "d959f3fa938ffb70285c4fe006b5ec8e4f7b88315257a5e8629229ec663ed934"
	I1019 17:32:17.132301  227579 cri.go:89] found id: "1fc58fbce400e6ef28650fd5f0e0edaa142b9b5f7c281501ecbc55ed3dd3e00d"
	I1019 17:32:17.132321  227579 cri.go:89] found id: "197ecf559616738c132d97a47e273cc3f3fba72a3ba90d7e2be8660caee32f50"
	I1019 17:32:17.132340  227579 cri.go:89] found id: ""
	I1019 17:32:17.132419  227579 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:32:17.155134  227579 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:32:17Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:32:17.155260  227579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:32:17.176031  227579 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:32:17.176100  227579 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:32:17.176167  227579 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:32:17.192737  227579 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:32:17.193257  227579 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-125363" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:17.193424  227579 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-125363" cluster setting kubeconfig missing "old-k8s-version-125363" context setting]
	I1019 17:32:17.193776  227579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:17.195492  227579 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:32:17.227146  227579 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:32:17.227226  227579 kubeadm.go:602] duration metric: took 51.106476ms to restartPrimaryControlPlane
	I1019 17:32:17.227250  227579 kubeadm.go:403] duration metric: took 195.74713ms to StartCluster
	I1019 17:32:17.227290  227579 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:17.227386  227579 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:17.228112  227579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:17.228399  227579 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:32:17.228832  227579 config.go:182] Loaded profile config "old-k8s-version-125363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:32:17.228795  227579 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:32:17.228955  227579 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-125363"
	I1019 17:32:17.228990  227579 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-125363"
	W1019 17:32:17.229042  227579 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:32:17.229077  227579 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:32:17.229018  227579 addons.go:70] Setting dashboard=true in profile "old-k8s-version-125363"
	I1019 17:32:17.229295  227579 addons.go:239] Setting addon dashboard=true in "old-k8s-version-125363"
	W1019 17:32:17.229303  227579 addons.go:248] addon dashboard should already be in state true
	I1019 17:32:17.229320  227579 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:32:17.230030  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.229024  227579 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-125363"
	I1019 17:32:17.230572  227579 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-125363"
	I1019 17:32:17.230873  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.231218  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.232221  227579 out.go:179] * Verifying Kubernetes components...
	I1019 17:32:17.236703  227579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:17.283274  227579 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-125363"
	W1019 17:32:17.283296  227579 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:32:17.283323  227579 host.go:66] Checking if "old-k8s-version-125363" exists ...
	I1019 17:32:17.284131  227579 cli_runner.go:164] Run: docker container inspect old-k8s-version-125363 --format={{.State.Status}}
	I1019 17:32:17.290219  227579 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:32:17.293148  227579 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:32:17.297555  227579 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:17.297579  227579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:32:17.297646  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:17.297833  227579 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:32:14.878582  225032 out.go:252]   - Generating certificates and keys ...
	I1019 17:32:14.878751  225032 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:32:14.878849  225032 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:32:15.015803  225032 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:32:15.263209  225032 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:32:15.780959  225032 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:32:15.912356  225032 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:32:16.212911  225032 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:32:16.213182  225032 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-038781] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:32:16.296754  225032 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:32:16.297311  225032 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-038781] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:32:17.265690  225032 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:32:17.767259  225032 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:32:18.199030  225032 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:32:18.199565  225032 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:32:19.088169  225032 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:32:19.232788  225032 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:32:19.634904  225032 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:32:17.300682  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:32:17.300709  227579 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:32:17.300776  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:17.331351  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:17.356782  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:17.359237  227579 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:17.359257  227579 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:32:17.359316  227579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-125363
	I1019 17:32:17.396929  227579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/old-k8s-version-125363/id_rsa Username:docker}
	I1019 17:32:17.656472  227579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:17.721224  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:32:17.721287  227579 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:32:17.732223  227579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:17.771811  227579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:17.780425  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:32:17.780450  227579 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:32:17.887241  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:32:17.887317  227579 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:32:18.022948  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:32:18.022968  227579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:32:18.222384  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:32:18.222405  227579 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:32:18.275707  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:32:18.275733  227579 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:32:18.322815  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:32:18.322840  227579 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:32:18.356587  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:32:18.356616  227579 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:32:18.390219  227579 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:32:18.390243  227579 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:32:18.424352  227579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
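
Editor's note: all ten dashboard manifests were first staged under /etc/kubernetes/addons (the scp lines above) and are then applied in a single kubectl invocation against the in-VM kubeconfig, using the version-pinned binary under /var/lib/minikube/binaries. A sketch of assembling that command line (applyCmd is a hypothetical helper):

package main

import (
	"fmt"
	"strings"
)

// applyCmd builds the single `kubectl apply -f a.yaml -f b.yaml ...` command
// shape seen in the log, shortened to two manifests for the example.
func applyCmd(k8sVersion string, manifests []string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/%s/kubectl apply", k8sVersion)
	for _, m := range manifests {
		b.WriteString(" -f " + m)
	}
	return b.String()
}

func main() {
	fmt.Println(applyCmd("v1.28.0", []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}))
}
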
	I1019 17:32:20.700286  225032 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:32:21.385008  225032 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:32:21.385109  225032 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:32:21.389103  225032 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:32:21.392804  225032 out.go:252]   - Booting up control plane ...
	I1019 17:32:21.392921  225032 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:32:21.393003  225032 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:32:21.393087  225032 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:32:21.414512  225032 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:32:21.414665  225032 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:32:21.430807  225032 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:32:21.431137  225032 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:32:21.431348  225032 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:32:21.649220  225032 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:32:21.649356  225032 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:32:23.649588  225032 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.0014623s
	I1019 17:32:23.653182  225032 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:32:23.653285  225032 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1019 17:32:23.653594  225032 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:32:23.653684  225032 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:32:26.842339  227579 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.110040041s)
	I1019 17:32:26.842745  227579 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.070874324s)
	I1019 17:32:26.842785  227579 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-125363" to be "Ready" ...
	I1019 17:32:26.843867  227579 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.187324753s)
	I1019 17:32:26.913207  227579 node_ready.go:49] node "old-k8s-version-125363" is "Ready"
	I1019 17:32:26.913236  227579 node_ready.go:38] duration metric: took 70.432316ms for node "old-k8s-version-125363" to be "Ready" ...
	I1019 17:32:26.913250  227579 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:32:26.913335  227579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:32:28.088459  227579 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.664061923s)
	I1019 17:32:28.088639  227579 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.175286819s)
	I1019 17:32:28.088660  227579 api_server.go:72] duration metric: took 10.860200373s to wait for apiserver process to appear ...
	I1019 17:32:28.088667  227579 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:32:28.088690  227579 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:32:28.091692  227579 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-125363 addons enable metrics-server
	
	I1019 17:32:28.094875  227579 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 17:32:28.488681  225032 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.83495401s
	I1019 17:32:28.098010  227579 addons.go:515] duration metric: took 10.869175813s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 17:32:28.107591  227579 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:32:28.109290  227579 api_server.go:141] control plane version: v1.28.0
	I1019 17:32:28.109321  227579 api_server.go:131] duration metric: took 20.643587ms to wait for apiserver health ...
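
Editor's note: the healthz wait above reduces to polling https://192.168.85.2:8443/healthz until it returns 200 with body "ok". An illustrative probe under that assumption; the real client authenticates via the cluster's kubeconfig and CA, so InsecureSkipVerify here is demo-only shorthand:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Demo only: skip TLS verification instead of loading the minikube CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.StatusCode) // 200 => control plane is up
}
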
	I1019 17:32:28.109330  227579 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:32:28.114837  227579 system_pods.go:59] 8 kube-system pods found
	I1019 17:32:28.114879  227579 system_pods.go:61] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:28.114888  227579 system_pods.go:61] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:32:28.114895  227579 system_pods.go:61] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:32:28.114902  227579 system_pods.go:61] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:32:28.114909  227579 system_pods.go:61] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:32:28.114919  227579 system_pods.go:61] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:32:28.114928  227579 system_pods.go:61] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:32:28.114938  227579 system_pods.go:61] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Running
	I1019 17:32:28.114948  227579 system_pods.go:74] duration metric: took 5.608477ms to wait for pod list to return data ...
	I1019 17:32:28.114962  227579 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:32:28.118920  227579 default_sa.go:45] found service account: "default"
	I1019 17:32:28.118949  227579 default_sa.go:55] duration metric: took 3.980159ms for default service account to be created ...
	I1019 17:32:28.118968  227579 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:32:28.127294  227579 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:28.127330  227579 system_pods.go:89] "coredns-5dd5756b68-28psj" [f627e140-a201-479b-9d5e-a9f9844ed7d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:28.127342  227579 system_pods.go:89] "etcd-old-k8s-version-125363" [c51bc899-b94e-4fa5-96de-13f0cf615b0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:32:28.127347  227579 system_pods.go:89] "kindnet-sgp8p" [0c027cd5-cea6-4170-860f-470cba905d64] Running
	I1019 17:32:28.127362  227579 system_pods.go:89] "kube-apiserver-old-k8s-version-125363" [eb1612dd-b2bc-46c2-afea-7d68c9f79168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:32:28.127374  227579 system_pods.go:89] "kube-controller-manager-old-k8s-version-125363" [e7e0e83a-269f-4e35-925c-81a5138a1eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:32:28.127384  227579 system_pods.go:89] "kube-proxy-zjv4r" [f145e324-d5e7-4643-a624-fc7b3420f6c6] Running
	I1019 17:32:28.127391  227579 system_pods.go:89] "kube-scheduler-old-k8s-version-125363" [5f09177d-cfc7-442b-a2c4-f4fb27344a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:32:28.127396  227579 system_pods.go:89] "storage-provisioner" [03c7a789-0ea1-4525-b93a-c70e9cbff9df] Running
	I1019 17:32:28.127409  227579 system_pods.go:126] duration metric: took 8.4346ms to wait for k8s-apps to be running ...
	I1019 17:32:28.127418  227579 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:32:28.127487  227579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:32:28.160542  227579 system_svc.go:56] duration metric: took 33.104025ms WaitForService to wait for kubelet
	I1019 17:32:28.160577  227579 kubeadm.go:587] duration metric: took 10.932111136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:32:28.160597  227579 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:32:28.166958  227579 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:32:28.166994  227579 node_conditions.go:123] node cpu capacity is 2
	I1019 17:32:28.167016  227579 node_conditions.go:105] duration metric: took 6.413619ms to run NodePressure ...
	I1019 17:32:28.167030  227579 start.go:242] waiting for startup goroutines ...
	I1019 17:32:28.167037  227579 start.go:247] waiting for cluster config update ...
	I1019 17:32:28.167052  227579 start.go:256] writing updated cluster config ...
	I1019 17:32:28.167431  227579 ssh_runner.go:195] Run: rm -f paused
	I1019 17:32:28.177206  227579 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:32:28.182304  227579 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-28psj" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:32:30.190015  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:30.033903  225032 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.380634634s
	I1019 17:32:31.655158  225032 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.001875799s
	I1019 17:32:31.674820  225032 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:32:31.699034  225032 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:32:31.717615  225032 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:32:31.717838  225032 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-038781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:32:31.731499  225032 kubeadm.go:319] [bootstrap-token] Using token: 69inx9.8tqqthy2gltoq5cz
	I1019 17:32:31.734660  225032 out.go:252]   - Configuring RBAC rules ...
	I1019 17:32:31.734790  225032 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:32:31.739330  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:32:31.748875  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:32:31.754296  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:32:31.760722  225032 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:32:31.765404  225032 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:32:32.064131  225032 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:32:32.510182  225032 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:32:33.063962  225032 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:32:33.065317  225032 kubeadm.go:319] 
	I1019 17:32:33.065399  225032 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:32:33.065410  225032 kubeadm.go:319] 
	I1019 17:32:33.065492  225032 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:32:33.065500  225032 kubeadm.go:319] 
	I1019 17:32:33.065526  225032 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:32:33.065593  225032 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:32:33.065652  225032 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:32:33.065661  225032 kubeadm.go:319] 
	I1019 17:32:33.065725  225032 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:32:33.065735  225032 kubeadm.go:319] 
	I1019 17:32:33.065790  225032 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:32:33.065798  225032 kubeadm.go:319] 
	I1019 17:32:33.065852  225032 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:32:33.065943  225032 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:32:33.066017  225032 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:32:33.066026  225032 kubeadm.go:319] 
	I1019 17:32:33.066119  225032 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:32:33.066204  225032 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:32:33.066213  225032 kubeadm.go:319] 
	I1019 17:32:33.066300  225032 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 69inx9.8tqqthy2gltoq5cz \
	I1019 17:32:33.066414  225032 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 17:32:33.066439  225032 kubeadm.go:319] 	--control-plane 
	I1019 17:32:33.066447  225032 kubeadm.go:319] 
	I1019 17:32:33.066563  225032 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:32:33.066579  225032 kubeadm.go:319] 
	I1019 17:32:33.066675  225032 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 69inx9.8tqqthy2gltoq5cz \
	I1019 17:32:33.066786  225032 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
	I1019 17:32:33.071106  225032 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 17:32:33.071352  225032 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 17:32:33.071467  225032 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
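
Editor's note: the --discovery-token-ca-cert-hash in the join commands above is, per kubeadm's public-key-pinning scheme, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small sketch that recomputes it on the node; the cert path is an assumption that follows the /var/lib/minikube/certs layout seen elsewhere in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
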
	I1019 17:32:33.071488  225032 cni.go:84] Creating CNI manager for ""
	I1019 17:32:33.071496  225032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:32:33.074624  225032 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:32:33.077685  225032 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:32:33.082593  225032 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:32:33.082624  225032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:32:33.099083  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:32:33.433160  225032 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:32:33.433284  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:33.433348  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-038781 minikube.k8s.io/updated_at=2025_10_19T17_32_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=no-preload-038781 minikube.k8s.io/primary=true
	I1019 17:32:33.580866  225032 ops.go:34] apiserver oom_adj: -16
	I1019 17:32:33.581044  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:34.081250  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:34.581410  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1019 17:32:32.688570  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:35.188993  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:35.081691  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:35.581084  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:36.081306  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:36.581546  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:37.081164  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:37.581412  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:38.081451  225032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:32:38.224307  225032 kubeadm.go:1114] duration metric: took 4.791066656s to wait for elevateKubeSystemPrivileges
	I1019 17:32:38.224333  225032 kubeadm.go:403] duration metric: took 23.648743694s to StartCluster
	I1019 17:32:38.224350  225032 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:38.224417  225032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:32:38.225374  225032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:32:38.225594  225032 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:32:38.225748  225032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:32:38.226006  225032 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:32:38.225975  225032 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:32:38.226063  225032 addons.go:70] Setting storage-provisioner=true in profile "no-preload-038781"
	I1019 17:32:38.226071  225032 addons.go:70] Setting default-storageclass=true in profile "no-preload-038781"
	I1019 17:32:38.226082  225032 addons.go:239] Setting addon storage-provisioner=true in "no-preload-038781"
	I1019 17:32:38.226086  225032 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-038781"
	I1019 17:32:38.226106  225032 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:32:38.226405  225032 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:32:38.226667  225032 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:32:38.229683  225032 out.go:179] * Verifying Kubernetes components...
	I1019 17:32:38.232671  225032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:32:38.269323  225032 addons.go:239] Setting addon default-storageclass=true in "no-preload-038781"
	I1019 17:32:38.269363  225032 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:32:38.269769  225032 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:32:38.271613  225032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:32:38.275181  225032 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:38.275213  225032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:32:38.275281  225032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:32:38.310603  225032 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:38.310641  225032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:32:38.310704  225032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:32:38.344728  225032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:32:38.363633  225032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:32:38.600115  225032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:32:38.600215  225032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:32:38.661855  225032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:32:38.733609  225032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:32:39.681837  225032 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.081588956s)
	I1019 17:32:39.682865  225032 node_ready.go:35] waiting up to 6m0s for node "no-preload-038781" to be "Ready" ...
	I1019 17:32:39.683198  225032 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.083052749s)
	I1019 17:32:39.683910  225032 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1019 17:32:39.683346  225032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.021464403s)
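
Editor's note: the long sed pipeline completed above rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line, so pods can resolve host.minikube.internal to the host gateway (192.168.76.1 here), and inserts a "log" directive ahead of "errors". Reconstructed from the command itself, not from captured output, the injected Corefile fragment looks like:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
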
	I1019 17:32:40.191104  225032 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-038781" context rescaled to 1 replicas
	I1019 17:32:40.224208  225032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490562172s)
	I1019 17:32:40.235074  225032 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1019 17:32:37.697012  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:40.191595  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:40.238286  225032 addons.go:515] duration metric: took 2.012295734s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1019 17:32:41.691278  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:44.186302  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:42.693998  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:45.191447  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:46.687641  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:49.185636  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:47.690052  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:49.693160  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:51.185699  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	W1019 17:32:53.188322  225032 node_ready.go:57] node "no-preload-038781" has "Ready":"False" status (will retry)
	I1019 17:32:54.186126  225032 node_ready.go:49] node "no-preload-038781" is "Ready"
	I1019 17:32:54.186167  225032 node_ready.go:38] duration metric: took 14.50324163s for node "no-preload-038781" to be "Ready" ...
	I1019 17:32:54.186181  225032 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:32:54.186261  225032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:32:54.212132  225032 api_server.go:72] duration metric: took 15.986507353s to wait for apiserver process to appear ...
	I1019 17:32:54.212203  225032 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:32:54.212236  225032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:32:54.221237  225032 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:32:54.222515  225032 api_server.go:141] control plane version: v1.34.1
	I1019 17:32:54.222583  225032 api_server.go:131] duration metric: took 10.36033ms to wait for apiserver health ...
	I1019 17:32:54.222592  225032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:32:54.227900  225032 system_pods.go:59] 8 kube-system pods found
	I1019 17:32:54.227941  225032 system_pods.go:61] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.227949  225032 system_pods.go:61] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.227956  225032 system_pods.go:61] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.227961  225032 system_pods.go:61] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.227969  225032 system_pods.go:61] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.227973  225032 system_pods.go:61] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.227978  225032 system_pods.go:61] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.227985  225032 system_pods.go:61] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:32:54.227997  225032 system_pods.go:74] duration metric: took 5.398708ms to wait for pod list to return data ...
	I1019 17:32:54.228009  225032 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:32:54.231472  225032 default_sa.go:45] found service account: "default"
	I1019 17:32:54.231500  225032 default_sa.go:55] duration metric: took 3.483207ms for default service account to be created ...
	I1019 17:32:54.231511  225032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:32:54.234356  225032 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:54.234392  225032 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.234401  225032 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.234408  225032 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.234412  225032 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.234417  225032 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.234420  225032 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.234425  225032 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.234433  225032 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:32:54.234453  225032 retry.go:31] will retry after 216.070278ms: missing components: kube-dns
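
Editor's note: the retry above comes from minikube treating kube-dns as a required component: it lists kube-system pods and, until a kube-dns pod reports Running, waits a jittered interval and tries again. A rough client-go sketch of that loop, under stated assumptions: the kubeconfig path is the one this log writes during StartCluster, and the linear backoff below is a simplification of retry.go's jittered behavior:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-2307/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for attempt := 1; attempt <= 10; attempt++ {
            pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Printf("kube-dns is running (%s)\n", p.Name)
                        return
                    }
                }
            }
            wait := time.Duration(attempt) * 250 * time.Millisecond // simplified backoff
            fmt.Printf("will retry after %v: missing components: kube-dns\n", wait)
            time.Sleep(wait)
        }
    }
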
	I1019 17:32:54.454950  225032 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:54.454987  225032 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.454994  225032 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.455000  225032 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.455005  225032 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.455010  225032 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.455014  225032 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.455018  225032 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.455026  225032 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:32:54.455052  225032 retry.go:31] will retry after 272.670908ms: missing components: kube-dns
	I1019 17:32:54.732924  225032 system_pods.go:86] 8 kube-system pods found
	I1019 17:32:54.732971  225032 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:32:54.733000  225032 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running
	I1019 17:32:54.733010  225032 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:32:54.733015  225032 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running
	I1019 17:32:54.733021  225032 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running
	I1019 17:32:54.733033  225032 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:32:54.733037  225032 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running
	I1019 17:32:54.733041  225032 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Running
	I1019 17:32:54.733050  225032 system_pods.go:126] duration metric: took 501.532253ms to wait for k8s-apps to be running ...
	I1019 17:32:54.733065  225032 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:32:54.733127  225032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:32:54.753410  225032 system_svc.go:56] duration metric: took 20.334398ms WaitForService to wait for kubelet
	I1019 17:32:54.753436  225032 kubeadm.go:587] duration metric: took 16.527818097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:32:54.753456  225032 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:32:54.758610  225032 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:32:54.758644  225032 node_conditions.go:123] node cpu capacity is 2
	I1019 17:32:54.758656  225032 node_conditions.go:105] duration metric: took 5.194389ms to run NodePressure ...
	I1019 17:32:54.758668  225032 start.go:242] waiting for startup goroutines ...
	I1019 17:32:54.758676  225032 start.go:247] waiting for cluster config update ...
	I1019 17:32:54.758687  225032 start.go:256] writing updated cluster config ...
	I1019 17:32:54.758983  225032 ssh_runner.go:195] Run: rm -f paused
	I1019 17:32:54.765558  225032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:32:54.769520  225032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:32:52.188515  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:54.189820  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:32:55.775637  225032 pod_ready.go:94] pod "coredns-66bc5c9577-6k8tn" is "Ready"
	I1019 17:32:55.775669  225032 pod_ready.go:86] duration metric: took 1.006121735s for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.779353  225032 pod_ready.go:83] waiting for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.783990  225032 pod_ready.go:94] pod "etcd-no-preload-038781" is "Ready"
	I1019 17:32:55.784011  225032 pod_ready.go:86] duration metric: took 4.632607ms for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.786609  225032 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.791192  225032 pod_ready.go:94] pod "kube-apiserver-no-preload-038781" is "Ready"
	I1019 17:32:55.791219  225032 pod_ready.go:86] duration metric: took 4.582892ms for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.793764  225032 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:55.973237  225032 pod_ready.go:94] pod "kube-controller-manager-no-preload-038781" is "Ready"
	I1019 17:32:55.973266  225032 pod_ready.go:86] duration metric: took 179.468167ms for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:56.173415  225032 pod_ready.go:83] waiting for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:56.573268  225032 pod_ready.go:94] pod "kube-proxy-2n5k9" is "Ready"
	I1019 17:32:56.573298  225032 pod_ready.go:86] duration metric: took 399.85069ms for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:56.773670  225032 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:57.173012  225032 pod_ready.go:94] pod "kube-scheduler-no-preload-038781" is "Ready"
	I1019 17:32:57.173080  225032 pod_ready.go:86] duration metric: took 399.379337ms for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:32:57.173101  225032 pod_ready.go:40] duration metric: took 2.407509578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:32:57.231384  225032 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:32:57.234760  225032 out.go:179] * Done! kubectl is now configured to use "no-preload-038781" cluster and "default" namespace by default
	W1019 17:32:56.688450  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:32:59.187911  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:33:01.188228  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	W1019 17:33:03.688022  227579 pod_ready.go:104] pod "coredns-5dd5756b68-28psj" is not "Ready", error: <nil>
	I1019 17:33:04.688587  227579 pod_ready.go:94] pod "coredns-5dd5756b68-28psj" is "Ready"
	I1019 17:33:04.688617  227579 pod_ready.go:86] duration metric: took 36.506285459s for pod "coredns-5dd5756b68-28psj" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.691745  227579 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.696870  227579 pod_ready.go:94] pod "etcd-old-k8s-version-125363" is "Ready"
	I1019 17:33:04.696940  227579 pod_ready.go:86] duration metric: took 5.167573ms for pod "etcd-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.699998  227579 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.704998  227579 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-125363" is "Ready"
	I1019 17:33:04.705026  227579 pod_ready.go:86] duration metric: took 4.999456ms for pod "kube-apiserver-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.708435  227579 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:04.886276  227579 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-125363" is "Ready"
	I1019 17:33:04.886303  227579 pod_ready.go:86] duration metric: took 177.843349ms for pod "kube-controller-manager-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:05.087697  227579 pod_ready.go:83] waiting for pod "kube-proxy-zjv4r" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:05.486874  227579 pod_ready.go:94] pod "kube-proxy-zjv4r" is "Ready"
	I1019 17:33:05.486902  227579 pod_ready.go:86] duration metric: took 399.171766ms for pod "kube-proxy-zjv4r" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:05.687248  227579 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:06.086988  227579 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-125363" is "Ready"
	I1019 17:33:06.087016  227579 pod_ready.go:86] duration metric: took 399.741727ms for pod "kube-scheduler-old-k8s-version-125363" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:06.087031  227579 pod_ready.go:40] duration metric: took 37.909788745s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:33:06.141064  227579 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1019 17:33:06.144605  227579 out.go:203] 
	W1019 17:33:06.147952  227579 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 17:33:06.151417  227579 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 17:33:06.154368  227579 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-125363" cluster and "default" namespace by default
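
Editor's note: both cluster startups end with a kubectl skew report. This one warns because kubectl supports only one minor version of skew against the apiserver, and 1.33 against 1.28 is five minors apart; the no-preload cluster at 1.34.1 (skew 1) was within tolerance. A hypothetical sketch of that check:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component from a "major.minor.patch" version string.
    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectlVersion, clusterVersion := "1.33.2", "1.28.0"
        skew := minor(kubectlVersion) - minor(clusterVersion)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // 5, matching the log line above
        if skew > 1 {
            fmt.Println("! kubectl may have incompatibilities with this cluster")
        }
    }
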
	
	
	==> CRI-O <==
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.009137887Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=19abcfc1-5753-4672-b122-55f6cae63479 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.011110693Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d6eced28-903f-44e9-a52a-84a2d0e27f53 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.012812768Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper" id=72f52115-a5e0-455d-b241-370af5cbf70a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.013197356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.021257308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.021835951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.039662515Z" level=info msg="Created container 9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper" id=72f52115-a5e0-455d-b241-370af5cbf70a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.040842677Z" level=info msg="Starting container: 9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa" id=81d66acd-4fb2-4b99-b2f1-c2b0a0dc7dc2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.043743367Z" level=info msg="Started container" PID=1641 containerID=9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper id=81d66acd-4fb2-4b99-b2f1-c2b0a0dc7dc2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a9ecf9a7e2d8e2da6ca01bb18add06b0cc723f3f77de8be9beeebcc58d37b86
	Oct 19 17:33:05 old-k8s-version-125363 conmon[1639]: conmon 9ae1da96d5ae4b025341 <ninfo>: container 1641 exited with status 1
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.166433298Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.172932461Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.172982554Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.173008097Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.176956345Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.176993687Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.177018401Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.180734365Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.180775884Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.180802666Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.185190551Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.185228837Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.340537648Z" level=info msg="Removing container: de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a" id=8088f337-ae50-42f2-8f5a-29e0d88b478e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.351938649Z" level=info msg="Error loading conmon cgroup of container de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a: cgroup deleted" id=8088f337-ae50-42f2-8f5a-29e0d88b478e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.359973821Z" level=info msg="Removed container de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper" id=8088f337-ae50-42f2-8f5a-29e0d88b478e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9ae1da96d5ae4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   6a9ecf9a7e2d8       dashboard-metrics-scraper-5f989dc9cf-7fdfh       kubernetes-dashboard
	3f1c54529ea02       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   74e1765243327       storage-provisioner                              kube-system
	01d7ad311ee27       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   29 seconds ago       Running             kubernetes-dashboard        0                   044e3995536a0       kubernetes-dashboard-8694d4445c-k2kx8            kubernetes-dashboard
	ece679b27632a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   e6008e5fc42ef       coredns-5dd5756b68-28psj                         kube-system
	c2f952e5b8bc3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   c97ffc7d9f275       busybox                                          default
	26fe11e3b4c99       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   5082b912d6b13       kube-proxy-zjv4r                                 kube-system
	9ef8929ec3547       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   617661470d9b1       kindnet-sgp8p                                    kube-system
	bd18b316c2a47       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   74e1765243327       storage-provisioner                              kube-system
	3c55bfaecaef6       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   41c0239bcf3d6       kube-apiserver-old-k8s-version-125363            kube-system
	d959f3fa938ff       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   24e7cc6438429       kube-controller-manager-old-k8s-version-125363   kube-system
	1fc58fbce400e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   87df55e98d3ea       etcd-old-k8s-version-125363                      kube-system
	197ecf5596167       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   364b172b41d9f       kube-scheduler-old-k8s-version-125363            kube-system
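
Editor's note: the status table shows dashboard-metrics-scraper Exited on attempt 2, which lines up with the CRI-O log above: the attempt-1 container (de7bffe…) was removed and its replacement (9ae1da9…) promptly exited with status 1. To dig further one would pull that container's logs; a small wrapper, assuming crictl is installed on the node and resolves the ID prefix shown in the table:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // ID prefix copied from the status table above.
        out, err := exec.Command("sudo", "crictl", "logs", "9ae1da96d5ae4").CombinedOutput()
        if err != nil {
            fmt.Println("crictl logs failed:", err)
        }
        fmt.Print(string(out))
    }
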
	
	
	==> coredns [ece679b27632a8e593d7fdf65a30b812a5e5883e49838353a369056eb0d077c4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42483 - 3076 "HINFO IN 1635565176832147072.3938495347753501213. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021859319s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
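
Editor's note: these "Still waiting on: \"kubernetes\"" lines explain the long pod_ready retries earlier in the log: CoreDNS's ready plugin serves readiness on port 8181 and returns 200 only once the kubernetes plugin has synced with the API server. A minimal manual probe; the pod IP below is a placeholder assumption on this node's PodCIDR:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // The ready plugin listens on :8181/ready by default.
        resp, err := http.Get("http://10.244.0.2:8181/ready")
        if err != nil {
            fmt.Println("not ready:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status) // 200 OK once the kubernetes plugin has synced
    }
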
	
	
	==> describe nodes <==
	Name:               old-k8s-version-125363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-125363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=old-k8s-version-125363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_31_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:31:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-125363
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:33:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-125363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ae1e6c1c-619e-4a12-af9f-474dab50c58c
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-5dd5756b68-28psj                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-old-k8s-version-125363                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m14s
	  kube-system                 kindnet-sgp8p                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-old-k8s-version-125363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-old-k8s-version-125363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-zjv4r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-old-k8s-version-125363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7fdfh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-k2kx8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x8 over 2m23s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m13s                  kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m13s                  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m13s                  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m1s                   node-controller  Node old-k8s-version-125363 event: Registered Node old-k8s-version-125363 in Controller
	  Normal  NodeReady                106s                   kubelet          Node old-k8s-version-125363 status is now: NodeReady
	  Normal  Starting                 66s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-125363 event: Registered Node old-k8s-version-125363 in Controller
	
	
	==> dmesg <==
	[Oct19 17:09] overlayfs: idmapped layers are currently not supported
	[ +28.820689] overlayfs: idmapped layers are currently not supported
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1fc58fbce400e6ef28650fd5f0e0edaa142b9b5f7c281501ecbc55ed3dd3e00d] <==
	{"level":"info","ts":"2025-10-19T17:32:17.158705Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:32:17.184758Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:32:17.182488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-19T17:32:17.184663Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T17:32:17.185146Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T17:32:17.185184Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T17:32:17.184708Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:32:17.185217Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:32:17.19924Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T17:32:17.199478Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:32:17.199577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:32:18.905635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T17:32:18.90594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T17:32:18.906222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:32:18.906433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.906648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.906823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.906952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.923323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:32:18.924635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T17:32:18.925009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:32:18.940279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T17:32:18.945493Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-125363 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T17:32:18.966693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T17:32:18.966805Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:33:21 up  1:15,  0 user,  load average: 3.61, 3.89, 3.40
	Linux old-k8s-version-125363 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9ef8929ec3547c8d7ccefe3c6cab404d96aa55f957ba041fbdbb09381cb26b3f] <==
	I1019 17:32:24.820732       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:32:24.831175       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:32:24.842119       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:32:24.842142       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:32:24.842170       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:32:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:32:25.166991       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:32:25.167082       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:32:25.167134       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:32:25.168211       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:32:55.167145       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:32:55.168327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:32:55.168327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:32:55.168520       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 17:32:56.667950       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:32:56.667985       1 metrics.go:72] Registering metrics
	I1019 17:32:56.668073       1 controller.go:711] "Syncing nftables rules"
	I1019 17:33:05.166131       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:33:05.166186       1 main.go:301] handling current node
	I1019 17:33:15.170307       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:33:15.170349       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c55bfaecaef635657a94348a5e34566add59da36166b771bc7f67010edd9cce] <==
	I1019 17:32:23.057659       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1019 17:32:23.658760       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:32:23.664313       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 17:32:23.669331       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 17:32:23.671264       1 aggregator.go:166] initial CRD sync complete...
	I1019 17:32:23.671358       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 17:32:23.671388       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:32:23.691137       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 17:32:23.744895       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 17:32:23.746678       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 17:32:23.747538       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 17:32:23.757762       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:32:23.769848       1 shared_informer.go:318] Caches are synced for configmaps
	I1019 17:32:23.774510       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:32:24.418098       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:32:27.644780       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 17:32:27.870873       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 17:32:27.906798       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:32:27.920670       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:32:27.940318       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 17:32:28.023557       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.76.78"}
	I1019 17:32:28.077275       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.85.99"}
	I1019 17:32:38.905709       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 17:32:39.099141       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:32:39.272790       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d959f3fa938ffb70285c4fe006b5ec8e4f7b88315257a5e8629229ec663ed934] <==
	I1019 17:32:39.027370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.063µs"
	I1019 17:32:39.058221       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-k2kx8"
	I1019 17:32:39.068299       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7fdfh"
	I1019 17:32:39.130529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="207.129297ms"
	I1019 17:32:39.182781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="264.3509ms"
	I1019 17:32:39.183676       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1019 17:32:39.271900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="141.16136ms"
	I1019 17:32:39.272117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="107.981µs"
	I1019 17:32:39.272291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.4024ms"
	I1019 17:32:39.272384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.231µs"
	I1019 17:32:39.283399       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:32:39.283504       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 17:32:39.295206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.884µs"
	I1019 17:32:39.315929       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:32:39.328706       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1019 17:32:39.329947       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1019 17:32:46.291095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.784µs"
	I1019 17:32:47.296943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.466µs"
	I1019 17:32:48.301098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.313µs"
	I1019 17:32:51.334355       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.275803ms"
	I1019 17:32:51.335270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.68µs"
	I1019 17:33:04.534451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.678059ms"
	I1019 17:33:04.536054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.403µs"
	I1019 17:33:05.357990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.394µs"
	I1019 17:33:09.439265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.155µs"
	
	
	==> kube-proxy [26fe11e3b4c99f777dd6ff13e00c2520375d45a54af8f47482b753935bdca6c4] <==
	I1019 17:32:26.157644       1 server_others.go:69] "Using iptables proxy"
	I1019 17:32:26.355760       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 17:32:27.276541       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:32:27.554690       1 server_others.go:152] "Using iptables Proxier"
	I1019 17:32:27.554801       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 17:32:27.564942       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 17:32:27.565086       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 17:32:27.581362       1 server.go:846] "Version info" version="v1.28.0"
	I1019 17:32:27.613122       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:32:27.613941       1 config.go:188] "Starting service config controller"
	I1019 17:32:27.614020       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 17:32:27.614085       1 config.go:97] "Starting endpoint slice config controller"
	I1019 17:32:27.614123       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 17:32:27.614669       1 config.go:315] "Starting node config controller"
	I1019 17:32:27.614736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 17:32:27.716883       1 shared_informer.go:318] Caches are synced for node config
	I1019 17:32:27.716912       1 shared_informer.go:318] Caches are synced for service config
	I1019 17:32:27.716935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [197ecf559616738c132d97a47e273cc3f3fba72a3ba90d7e2be8660caee32f50] <==
	I1019 17:32:23.701124       1 serving.go:348] Generated self-signed cert in-memory
	I1019 17:32:27.751098       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1019 17:32:27.751132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:32:27.792135       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1019 17:32:27.792318       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1019 17:32:27.792359       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1019 17:32:27.792414       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1019 17:32:27.795548       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:32:27.795580       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 17:32:27.795604       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:32:27.795609       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1019 17:32:27.893901       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1019 17:32:27.897638       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1019 17:32:27.897776       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.225643     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhzl\" (UniqueName: \"kubernetes.io/projected/37171d35-3991-4788-92bd-48a0fb135edf-kube-api-access-szhzl\") pod \"kubernetes-dashboard-8694d4445c-k2kx8\" (UID: \"37171d35-3991-4788-92bd-48a0fb135edf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k2kx8"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.226524     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7c4q\" (UniqueName: \"kubernetes.io/projected/c7438e11-aa8b-4e74-97c7-9c04ef6c4c07-kube-api-access-b7c4q\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fdfh\" (UID: \"c7438e11-aa8b-4e74-97c7-9c04ef6c4c07\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.226760     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7438e11-aa8b-4e74-97c7-9c04ef6c4c07-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fdfh\" (UID: \"c7438e11-aa8b-4e74-97c7-9c04ef6c4c07\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.231383     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/37171d35-3991-4788-92bd-48a0fb135edf-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-k2kx8\" (UID: \"37171d35-3991-4788-92bd-48a0fb135edf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k2kx8"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: W1019 17:32:39.538892     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/crio-044e3995536a0771e771f07ea22e190cec6fba9f356d0a3d92d87bfb7ab82e0d WatchSource:0}: Error finding container 044e3995536a0771e771f07ea22e190cec6fba9f356d0a3d92d87bfb7ab82e0d: Status 404 returned error can't find the container with id 044e3995536a0771e771f07ea22e190cec6fba9f356d0a3d92d87bfb7ab82e0d
	Oct 19 17:32:46 old-k8s-version-125363 kubelet[783]: I1019 17:32:46.274328     783 scope.go:117] "RemoveContainer" containerID="5de7e6b7c59fa701637341e8e3d90d1ef84d36b8e222d98fec0462e29d74018d"
	Oct 19 17:32:47 old-k8s-version-125363 kubelet[783]: I1019 17:32:47.278809     783 scope.go:117] "RemoveContainer" containerID="5de7e6b7c59fa701637341e8e3d90d1ef84d36b8e222d98fec0462e29d74018d"
	Oct 19 17:32:47 old-k8s-version-125363 kubelet[783]: I1019 17:32:47.279206     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:32:47 old-k8s-version-125363 kubelet[783]: E1019 17:32:47.279479     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:32:48 old-k8s-version-125363 kubelet[783]: I1019 17:32:48.282670     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:32:48 old-k8s-version-125363 kubelet[783]: E1019 17:32:48.286324     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:32:49 old-k8s-version-125363 kubelet[783]: I1019 17:32:49.420316     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:32:49 old-k8s-version-125363 kubelet[783]: E1019 17:32:49.420611     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:32:56 old-k8s-version-125363 kubelet[783]: I1019 17:32:56.306969     783 scope.go:117] "RemoveContainer" containerID="bd18b316c2a475ead84f1e6fa45e355d643a387c9a6060c8b54a84a10f5a3408"
	Oct 19 17:32:56 old-k8s-version-125363 kubelet[783]: I1019 17:32:56.333084     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k2kx8" podStartSLOduration=5.8562756799999995 podCreationTimestamp="2025-10-19 17:32:39 +0000 UTC" firstStartedPulling="2025-10-19 17:32:39.551300829 +0000 UTC m=+23.798947059" lastFinishedPulling="2025-10-19 17:32:51.02624516 +0000 UTC m=+35.273891390" observedRunningTime="2025-10-19 17:32:51.318373186 +0000 UTC m=+35.566019441" watchObservedRunningTime="2025-10-19 17:32:56.331220011 +0000 UTC m=+40.578866241"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: I1019 17:33:05.007695     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: I1019 17:33:05.337214     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: I1019 17:33:05.337422     783 scope.go:117] "RemoveContainer" containerID="9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: E1019 17:33:05.337696     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:33:09 old-k8s-version-125363 kubelet[783]: I1019 17:33:09.419576     783 scope.go:117] "RemoveContainer" containerID="9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa"
	Oct 19 17:33:09 old-k8s-version-125363 kubelet[783]: E1019 17:33:09.420330     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:33:18 old-k8s-version-125363 kubelet[783]: I1019 17:33:18.475825     783 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 17:33:18 old-k8s-version-125363 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:33:18 old-k8s-version-125363 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:33:18 old-k8s-version-125363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [01d7ad311ee27ef3a024b0e4479aea674714fcb757bf1a7c0706e86d8e1819bc] <==
	2025/10/19 17:32:51 Using namespace: kubernetes-dashboard
	2025/10/19 17:32:51 Using in-cluster config to connect to apiserver
	2025/10/19 17:32:51 Using secret token for csrf signing
	2025/10/19 17:32:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:32:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:32:51 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 17:32:51 Generating JWE encryption key
	2025/10/19 17:32:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:32:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:32:51 Initializing JWE encryption key from synchronized object
	2025/10/19 17:32:51 Creating in-cluster Sidecar client
	2025/10/19 17:32:51 Serving insecurely on HTTP port: 9090
	2025/10/19 17:32:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:33:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:32:51 Starting overwatch
	
	
	==> storage-provisioner [3f1c54529ea02b321c4155885fdf7f0ab373762c36dbd8b6947f0ec9445bdc3f] <==
	I1019 17:32:56.364141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:32:56.377842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:32:56.378760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 17:33:13.789056       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:33:13.789350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-125363_f59b65ce-2484-48b5-89fb-5776aa8e9659!
	I1019 17:33:13.791125       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc89ac55-acf0-4d8e-a1f1-fca5e969b730", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-125363_f59b65ce-2484-48b5-89fb-5776aa8e9659 became leader
	I1019 17:33:13.890288       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-125363_f59b65ce-2484-48b5-89fb-5776aa8e9659!
	
	
	==> storage-provisioner [bd18b316c2a475ead84f1e6fa45e355d643a387c9a6060c8b54a84a10f5a3408] <==
	I1019 17:32:25.466804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:32:55.551451       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-125363 -n old-k8s-version-125363
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-125363 -n old-k8s-version-125363: exit status 2 (519.253379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
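
`minikube status` encodes component state in its exit code, so a non-zero exit here does not by itself mean the probe failed; the harness flags it "(may be ok)" because a paused cluster legitimately reports components in a non-Running state. A minimal sketch for probing all fields in one call, assuming a `{{.Kubelet}}` template field exists alongside the `{{.Host}}` and `{{.APIServer}}` fields the harness queries above:

	# sketch: combined status probe; {{.Kubelet}} is assumed by analogy
	# with the {{.Host}}/{{.APIServer}} queries the harness runs above
	out/minikube-linux-arm64 status -p old-k8s-version-125363 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
	echo "exit=$?"   # non-zero encodes component state, not only hard errors
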
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-125363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-125363
helpers_test.go:243: (dbg) docker inspect old-k8s-version-125363:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18",
	        "Created": "2025-10-19T17:30:37.268621175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227711,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:32:06.121848116Z",
	            "FinishedAt": "2025-10-19T17:32:03.943644179Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/hosts",
	        "LogPath": "/var/lib/docker/containers/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18-json.log",
	        "Name": "/old-k8s-version-125363",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-125363:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-125363",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18",
	                "LowerDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98e31fc094fad9154a9e8d4ad13c69ae963a31d8b25a0fac371c82e8a6523c15/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-125363",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-125363/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-125363",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-125363",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-125363",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "75c7702cf2ecf7dbe9f89ecd1617ed8c066602b44445f0fc55fabed66d881fa4",
	            "SandboxKey": "/var/run/docker/netns/75c7702cf2ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-125363": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f1:eb:dc:b6:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c605d5ace27fd5383c607c72991f6fd31798e2bf8285be119b02bf86a3e7e1c",
	                    "EndpointID": "872cbc80b1bb7591adc70973c2ab7a7dd0ed93632f5ee6528ea215a414ea3d84",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-125363",
	                        "7cebf5ae65ac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
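
The inspect output above publishes each guest port on 127.0.0.1 with an ephemeral host port; the API server endpoint is the 8443/tcp mapping. A one-line sketch that pulls it out with docker's Go-template support:

	# sketch: resolve the host port behind the 8443/tcp mapping;
	# per the output above this prints 33096 while the container runs
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-125363
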
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363: exit status 2 (519.329068ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-125363 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-125363 logs -n 25: (1.384033668s)
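
As the dump below shows, `minikube logs` groups its output under `==> component <==` headers. A small sketch for isolating one component while triaging, assuming the headers start at column 0 in the raw output:

	# sketch: print only the kubelet section of the logs dump
	out/minikube-linux-arm64 -p old-k8s-version-125363 logs -n 25 \
	  | awk '/^==> kubelet <==$/{f=1; next} /^==> /{f=0} f'
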
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-953581 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ ssh     │ -p bridge-953581 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo containerd config dump                                                                                                                                                                                                  │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo crio config                                                                                                                                                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ delete  │ -p bridge-953581                                                                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ stop    │ -p old-k8s-version-125363 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:32 UTC │
	│ start   │ -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                                                                                               │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:33:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:33:21.583115  232207 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:33:21.583237  232207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:21.583247  232207 out.go:374] Setting ErrFile to fd 2...
	I1019 17:33:21.583261  232207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:21.583531  232207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:33:21.583954  232207 out.go:368] Setting JSON to false
	I1019 17:33:21.585478  232207 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4549,"bootTime":1760890652,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:33:21.586666  232207 start.go:143] virtualization:  
	I1019 17:33:21.590254  232207 out.go:179] * [no-preload-038781] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:33:21.594202  232207 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:33:21.594387  232207 notify.go:221] Checking for updates...
	I1019 17:33:21.602746  232207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:33:21.605863  232207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:33:21.608907  232207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:33:21.612368  232207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:33:21.615672  232207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:33:21.619163  232207 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:33:21.619746  232207 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:33:21.684317  232207 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:33:21.684443  232207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:33:21.784654  232207 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:33:21.774398909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:33:21.784768  232207 docker.go:319] overlay module found
	I1019 17:33:21.787848  232207 out.go:179] * Using the docker driver based on existing profile
	I1019 17:33:21.791523  232207 start.go:309] selected driver: docker
	I1019 17:33:21.791544  232207 start.go:930] validating driver "docker" against &{Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:21.791637  232207 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:33:21.792296  232207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:33:21.887480  232207 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:33:21.876242023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:33:21.887819  232207 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:33:21.887850  232207 cni.go:84] Creating CNI manager for ""
	I1019 17:33:21.887902  232207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:33:21.887948  232207 start.go:353] cluster config:
	{Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:21.892729  232207 out.go:179] * Starting "no-preload-038781" primary control-plane node in "no-preload-038781" cluster
	I1019 17:33:21.895587  232207 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:33:21.898600  232207 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:33:21.901415  232207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:21.901584  232207 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/config.json ...
	I1019 17:33:21.901914  232207 cache.go:107] acquiring lock: {Name:mk360bbbbbfbc6c04e1d6fb1ecb6d8ef11dacfae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902000  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 17:33:21.902013  232207 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 111.477µs
	I1019 17:33:21.902022  232207 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 17:33:21.902034  232207 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:33:21.902229  232207 cache.go:107] acquiring lock: {Name:mka8b4b5ce05ab7738100522d589edc05243a365 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902297  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1019 17:33:21.902310  232207 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 87.165µs
	I1019 17:33:21.902318  232207 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1019 17:33:21.902329  232207 cache.go:107] acquiring lock: {Name:mk550c2dc21a4e753a37654dd66898a22aea8501 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902364  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1019 17:33:21.902373  232207 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 45.835µs
	I1019 17:33:21.902380  232207 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1019 17:33:21.902389  232207 cache.go:107] acquiring lock: {Name:mk532a0abee835d134bf84d40aeedc76b9b30236 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902421  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1019 17:33:21.902430  232207 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 41.526µs
	I1019 17:33:21.902436  232207 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1019 17:33:21.902445  232207 cache.go:107] acquiring lock: {Name:mke9bc1b713c3855333e6f7bb7fa875f711fa1bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902472  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1019 17:33:21.902482  232207 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 37.318µs
	I1019 17:33:21.902497  232207 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1019 17:33:21.902512  232207 cache.go:107] acquiring lock: {Name:mka735caca8cac3a7c200d692aa0b29def8fa76b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902637  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1019 17:33:21.902648  232207 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 136.601µs
	I1019 17:33:21.902654  232207 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1019 17:33:21.902665  232207 cache.go:107] acquiring lock: {Name:mkb145c20e0b0892f39a24c95c4c4c51d2d205bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902700  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1019 17:33:21.902708  232207 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 45.096µs
	I1019 17:33:21.902720  232207 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1019 17:33:21.902736  232207 cache.go:107] acquiring lock: {Name:mk3b9fe16e771671ee7b5735238b50a7645b3529 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.902763  232207 cache.go:115] /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1019 17:33:21.902772  232207 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 36.825µs
	I1019 17:33:21.902778  232207 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1019 17:33:21.902785  232207 cache.go:87] Successfully saved all images to host disk.
	I1019 17:33:21.942308  232207 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:33:21.942330  232207 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:33:21.942343  232207 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:33:21.942365  232207 start.go:360] acquireMachinesLock for no-preload-038781: {Name:mk4cfad425e5ad0a11c8b3ca794ebd573c4f0113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:21.942417  232207 start.go:364] duration metric: took 36.784µs to acquireMachinesLock for "no-preload-038781"
	I1019 17:33:21.942441  232207 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:33:21.942446  232207 fix.go:54] fixHost starting: 
	I1019 17:33:21.942730  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:21.964876  232207 fix.go:112] recreateIfNeeded on no-preload-038781: state=Stopped err=<nil>
	W1019 17:33:21.964908  232207 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.009137887Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=19abcfc1-5753-4672-b122-55f6cae63479 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.011110693Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d6eced28-903f-44e9-a52a-84a2d0e27f53 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.012812768Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper" id=72f52115-a5e0-455d-b241-370af5cbf70a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.013197356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.021257308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.021835951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.039662515Z" level=info msg="Created container 9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper" id=72f52115-a5e0-455d-b241-370af5cbf70a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.040842677Z" level=info msg="Starting container: 9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa" id=81d66acd-4fb2-4b99-b2f1-c2b0a0dc7dc2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.043743367Z" level=info msg="Started container" PID=1641 containerID=9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper id=81d66acd-4fb2-4b99-b2f1-c2b0a0dc7dc2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a9ecf9a7e2d8e2da6ca01bb18add06b0cc723f3f77de8be9beeebcc58d37b86
	Oct 19 17:33:05 old-k8s-version-125363 conmon[1639]: conmon 9ae1da96d5ae4b025341 <ninfo>: container 1641 exited with status 1
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.166433298Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.172932461Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.172982554Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.173008097Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.176956345Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.176993687Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.177018401Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.180734365Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.180775884Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.180802666Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.185190551Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.185228837Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.340537648Z" level=info msg="Removing container: de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a" id=8088f337-ae50-42f2-8f5a-29e0d88b478e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.351938649Z" level=info msg="Error loading conmon cgroup of container de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a: cgroup deleted" id=8088f337-ae50-42f2-8f5a-29e0d88b478e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:33:05 old-k8s-version-125363 crio[655]: time="2025-10-19T17:33:05.359973821Z" level=info msg="Removed container de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh/dashboard-metrics-scraper" id=8088f337-ae50-42f2-8f5a-29e0d88b478e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9ae1da96d5ae4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   6a9ecf9a7e2d8       dashboard-metrics-scraper-5f989dc9cf-7fdfh       kubernetes-dashboard
	3f1c54529ea02       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   74e1765243327       storage-provisioner                              kube-system
	01d7ad311ee27       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   044e3995536a0       kubernetes-dashboard-8694d4445c-k2kx8            kubernetes-dashboard
	ece679b27632a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           58 seconds ago       Running             coredns                     1                   e6008e5fc42ef       coredns-5dd5756b68-28psj                         kube-system
	c2f952e5b8bc3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   c97ffc7d9f275       busybox                                          default
	26fe11e3b4c99       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   5082b912d6b13       kube-proxy-zjv4r                                 kube-system
	9ef8929ec3547       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   617661470d9b1       kindnet-sgp8p                                    kube-system
	bd18b316c2a47       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   74e1765243327       storage-provisioner                              kube-system
	3c55bfaecaef6       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   41c0239bcf3d6       kube-apiserver-old-k8s-version-125363            kube-system
	d959f3fa938ff       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   24e7cc6438429       kube-controller-manager-old-k8s-version-125363   kube-system
	1fc58fbce400e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   87df55e98d3ea       etcd-old-k8s-version-125363                      kube-system
	197ecf5596167       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   364b172b41d9f       kube-scheduler-old-k8s-version-125363            kube-system
	
	
	==> coredns [ece679b27632a8e593d7fdf65a30b812a5e5883e49838353a369056eb0d077c4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42483 - 3076 "HINFO IN 1635565176832147072.3938495347753501213. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021859319s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-125363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-125363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=old-k8s-version-125363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_31_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:31:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-125363
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:33:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:32:54 +0000   Sun, 19 Oct 2025 17:31:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-125363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ae1e6c1c-619e-4a12-af9f-474dab50c58c
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-5dd5756b68-28psj                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-old-k8s-version-125363                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m16s
	  kube-system                 kindnet-sgp8p                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m2s
	  kube-system                 kube-apiserver-old-k8s-version-125363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-old-k8s-version-125363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-zjv4r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-old-k8s-version-125363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7fdfh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-k2kx8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m25s (x8 over 2m25s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s (x8 over 2m25s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s (x8 over 2m25s)  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m15s                  kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m15s                  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m15s                  kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m3s                   node-controller  Node old-k8s-version-125363 event: Registered Node old-k8s-version-125363 in Controller
	  Normal  NodeReady                108s                   kubelet          Node old-k8s-version-125363 status is now: NodeReady
	  Normal  Starting                 68s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node old-k8s-version-125363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node old-k8s-version-125363 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-125363 event: Registered Node old-k8s-version-125363 in Controller
	
	
	==> dmesg <==
	[Oct19 17:09] overlayfs: idmapped layers are currently not supported
	[ +28.820689] overlayfs: idmapped layers are currently not supported
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1fc58fbce400e6ef28650fd5f0e0edaa142b9b5f7c281501ecbc55ed3dd3e00d] <==
	{"level":"info","ts":"2025-10-19T17:32:17.158705Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:32:17.184758Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T17:32:17.182488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-19T17:32:17.184663Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T17:32:17.185146Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T17:32:17.185184Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T17:32:17.184708Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:32:17.185217Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T17:32:17.19924Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T17:32:17.199478Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:32:17.199577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:32:18.905635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T17:32:18.90594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T17:32:18.906222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T17:32:18.906433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.906648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.906823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.906952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T17:32:18.923323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:32:18.924635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T17:32:18.925009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:32:18.940279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T17:32:18.945493Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-125363 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T17:32:18.966693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T17:32:18.966805Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:33:23 up  1:15,  0 user,  load average: 3.56, 3.88, 3.40
	Linux old-k8s-version-125363 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9ef8929ec3547c8d7ccefe3c6cab404d96aa55f957ba041fbdbb09381cb26b3f] <==
	I1019 17:32:24.820732       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:32:24.831175       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:32:24.842119       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:32:24.842142       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:32:24.842170       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:32:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:32:25.166991       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:32:25.167082       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:32:25.167134       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:32:25.168211       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:32:55.167145       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:32:55.168327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:32:55.168327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:32:55.168520       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 17:32:56.667950       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:32:56.667985       1 metrics.go:72] Registering metrics
	I1019 17:32:56.668073       1 controller.go:711] "Syncing nftables rules"
	I1019 17:33:05.166131       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:33:05.166186       1 main.go:301] handling current node
	I1019 17:33:15.170307       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:33:15.170349       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c55bfaecaef635657a94348a5e34566add59da36166b771bc7f67010edd9cce] <==
	I1019 17:32:23.057659       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1019 17:32:23.658760       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:32:23.664313       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 17:32:23.669331       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 17:32:23.671264       1 aggregator.go:166] initial CRD sync complete...
	I1019 17:32:23.671358       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 17:32:23.671388       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:32:23.691137       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 17:32:23.744895       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 17:32:23.746678       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 17:32:23.747538       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 17:32:23.757762       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:32:23.769848       1 shared_informer.go:318] Caches are synced for configmaps
	I1019 17:32:23.774510       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:32:24.418098       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:32:27.644780       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 17:32:27.870873       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 17:32:27.906798       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:32:27.920670       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:32:27.940318       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 17:32:28.023557       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.76.78"}
	I1019 17:32:28.077275       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.85.99"}
	I1019 17:32:38.905709       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 17:32:39.099141       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:32:39.272790       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d959f3fa938ffb70285c4fe006b5ec8e4f7b88315257a5e8629229ec663ed934] <==
	I1019 17:32:39.027370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.063µs"
	I1019 17:32:39.058221       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-k2kx8"
	I1019 17:32:39.068299       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7fdfh"
	I1019 17:32:39.130529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="207.129297ms"
	I1019 17:32:39.182781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="264.3509ms"
	I1019 17:32:39.183676       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1019 17:32:39.271900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="141.16136ms"
	I1019 17:32:39.272117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="107.981µs"
	I1019 17:32:39.272291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.4024ms"
	I1019 17:32:39.272384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.231µs"
	I1019 17:32:39.283399       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:32:39.283504       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 17:32:39.295206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.884µs"
	I1019 17:32:39.315929       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 17:32:39.328706       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1019 17:32:39.329947       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1019 17:32:46.291095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.784µs"
	I1019 17:32:47.296943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.466µs"
	I1019 17:32:48.301098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.313µs"
	I1019 17:32:51.334355       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.275803ms"
	I1019 17:32:51.335270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.68µs"
	I1019 17:33:04.534451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.678059ms"
	I1019 17:33:04.536054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.403µs"
	I1019 17:33:05.357990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.394µs"
	I1019 17:33:09.439265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.155µs"
	
	
	==> kube-proxy [26fe11e3b4c99f777dd6ff13e00c2520375d45a54af8f47482b753935bdca6c4] <==
	I1019 17:32:26.157644       1 server_others.go:69] "Using iptables proxy"
	I1019 17:32:26.355760       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 17:32:27.276541       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:32:27.554690       1 server_others.go:152] "Using iptables Proxier"
	I1019 17:32:27.554801       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 17:32:27.564942       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 17:32:27.565086       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 17:32:27.581362       1 server.go:846] "Version info" version="v1.28.0"
	I1019 17:32:27.613122       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:32:27.613941       1 config.go:188] "Starting service config controller"
	I1019 17:32:27.614020       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 17:32:27.614085       1 config.go:97] "Starting endpoint slice config controller"
	I1019 17:32:27.614123       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 17:32:27.614669       1 config.go:315] "Starting node config controller"
	I1019 17:32:27.614736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 17:32:27.716883       1 shared_informer.go:318] Caches are synced for node config
	I1019 17:32:27.716912       1 shared_informer.go:318] Caches are synced for service config
	I1019 17:32:27.716935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [197ecf559616738c132d97a47e273cc3f3fba72a3ba90d7e2be8660caee32f50] <==
	I1019 17:32:23.701124       1 serving.go:348] Generated self-signed cert in-memory
	I1019 17:32:27.751098       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1019 17:32:27.751132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:32:27.792135       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1019 17:32:27.792318       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1019 17:32:27.792359       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1019 17:32:27.792414       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1019 17:32:27.795548       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:32:27.795580       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 17:32:27.795604       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:32:27.795609       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1019 17:32:27.893901       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1019 17:32:27.897638       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1019 17:32:27.897776       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.225643     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhzl\" (UniqueName: \"kubernetes.io/projected/37171d35-3991-4788-92bd-48a0fb135edf-kube-api-access-szhzl\") pod \"kubernetes-dashboard-8694d4445c-k2kx8\" (UID: \"37171d35-3991-4788-92bd-48a0fb135edf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k2kx8"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.226524     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7c4q\" (UniqueName: \"kubernetes.io/projected/c7438e11-aa8b-4e74-97c7-9c04ef6c4c07-kube-api-access-b7c4q\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fdfh\" (UID: \"c7438e11-aa8b-4e74-97c7-9c04ef6c4c07\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.226760     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7438e11-aa8b-4e74-97c7-9c04ef6c4c07-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fdfh\" (UID: \"c7438e11-aa8b-4e74-97c7-9c04ef6c4c07\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: I1019 17:32:39.231383     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/37171d35-3991-4788-92bd-48a0fb135edf-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-k2kx8\" (UID: \"37171d35-3991-4788-92bd-48a0fb135edf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k2kx8"
	Oct 19 17:32:39 old-k8s-version-125363 kubelet[783]: W1019 17:32:39.538892     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7cebf5ae65accddaa2e1fb456fc8de4ee04c29044d83dc53a21cc82868af5f18/crio-044e3995536a0771e771f07ea22e190cec6fba9f356d0a3d92d87bfb7ab82e0d WatchSource:0}: Error finding container 044e3995536a0771e771f07ea22e190cec6fba9f356d0a3d92d87bfb7ab82e0d: Status 404 returned error can't find the container with id 044e3995536a0771e771f07ea22e190cec6fba9f356d0a3d92d87bfb7ab82e0d
	Oct 19 17:32:46 old-k8s-version-125363 kubelet[783]: I1019 17:32:46.274328     783 scope.go:117] "RemoveContainer" containerID="5de7e6b7c59fa701637341e8e3d90d1ef84d36b8e222d98fec0462e29d74018d"
	Oct 19 17:32:47 old-k8s-version-125363 kubelet[783]: I1019 17:32:47.278809     783 scope.go:117] "RemoveContainer" containerID="5de7e6b7c59fa701637341e8e3d90d1ef84d36b8e222d98fec0462e29d74018d"
	Oct 19 17:32:47 old-k8s-version-125363 kubelet[783]: I1019 17:32:47.279206     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:32:47 old-k8s-version-125363 kubelet[783]: E1019 17:32:47.279479     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:32:48 old-k8s-version-125363 kubelet[783]: I1019 17:32:48.282670     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:32:48 old-k8s-version-125363 kubelet[783]: E1019 17:32:48.286324     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:32:49 old-k8s-version-125363 kubelet[783]: I1019 17:32:49.420316     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:32:49 old-k8s-version-125363 kubelet[783]: E1019 17:32:49.420611     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:32:56 old-k8s-version-125363 kubelet[783]: I1019 17:32:56.306969     783 scope.go:117] "RemoveContainer" containerID="bd18b316c2a475ead84f1e6fa45e355d643a387c9a6060c8b54a84a10f5a3408"
	Oct 19 17:32:56 old-k8s-version-125363 kubelet[783]: I1019 17:32:56.333084     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k2kx8" podStartSLOduration=5.8562756799999995 podCreationTimestamp="2025-10-19 17:32:39 +0000 UTC" firstStartedPulling="2025-10-19 17:32:39.551300829 +0000 UTC m=+23.798947059" lastFinishedPulling="2025-10-19 17:32:51.02624516 +0000 UTC m=+35.273891390" observedRunningTime="2025-10-19 17:32:51.318373186 +0000 UTC m=+35.566019441" watchObservedRunningTime="2025-10-19 17:32:56.331220011 +0000 UTC m=+40.578866241"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: I1019 17:33:05.007695     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: I1019 17:33:05.337214     783 scope.go:117] "RemoveContainer" containerID="de7bffe76fa09706bb2c1eb663d3fe6f87d32e7fbb5b55a6a823de18645e7b3a"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: I1019 17:33:05.337422     783 scope.go:117] "RemoveContainer" containerID="9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa"
	Oct 19 17:33:05 old-k8s-version-125363 kubelet[783]: E1019 17:33:05.337696     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:33:09 old-k8s-version-125363 kubelet[783]: I1019 17:33:09.419576     783 scope.go:117] "RemoveContainer" containerID="9ae1da96d5ae4b025341e1d50f8da02b6a7683c46ab2a07a48d5cc2cb2e0c6aa"
	Oct 19 17:33:09 old-k8s-version-125363 kubelet[783]: E1019 17:33:09.420330     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fdfh_kubernetes-dashboard(c7438e11-aa8b-4e74-97c7-9c04ef6c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fdfh" podUID="c7438e11-aa8b-4e74-97c7-9c04ef6c4c07"
	Oct 19 17:33:18 old-k8s-version-125363 kubelet[783]: I1019 17:33:18.475825     783 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 17:33:18 old-k8s-version-125363 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:33:18 old-k8s-version-125363 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:33:18 old-k8s-version-125363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [01d7ad311ee27ef3a024b0e4479aea674714fcb757bf1a7c0706e86d8e1819bc] <==
	2025/10/19 17:32:51 Using namespace: kubernetes-dashboard
	2025/10/19 17:32:51 Using in-cluster config to connect to apiserver
	2025/10/19 17:32:51 Using secret token for csrf signing
	2025/10/19 17:32:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:32:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:32:51 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 17:32:51 Generating JWE encryption key
	2025/10/19 17:32:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:32:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:32:51 Initializing JWE encryption key from synchronized object
	2025/10/19 17:32:51 Creating in-cluster Sidecar client
	2025/10/19 17:32:51 Serving insecurely on HTTP port: 9090
	2025/10/19 17:32:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:33:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:32:51 Starting overwatch
	
	
	==> storage-provisioner [3f1c54529ea02b321c4155885fdf7f0ab373762c36dbd8b6947f0ec9445bdc3f] <==
	I1019 17:32:56.364141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:32:56.377842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:32:56.378760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 17:33:13.789056       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:33:13.789350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-125363_f59b65ce-2484-48b5-89fb-5776aa8e9659!
	I1019 17:33:13.791125       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc89ac55-acf0-4d8e-a1f1-fca5e969b730", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-125363_f59b65ce-2484-48b5-89fb-5776aa8e9659 became leader
	I1019 17:33:13.890288       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-125363_f59b65ce-2484-48b5-89fb-5776aa8e9659!
	
	
	==> storage-provisioner [bd18b316c2a475ead84f1e6fa45e355d643a387c9a6060c8b54a84a10f5a3408] <==
	I1019 17:32:25.466804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:32:55.551451       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-125363 -n old-k8s-version-125363
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-125363 -n old-k8s-version-125363: exit status 2 (359.216588ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-125363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-038781 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-038781 --alsologtostderr -v=1: exit status 80 (1.958095277s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-038781 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:34:26.387330  237648 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:34:26.387524  237648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:34:26.387537  237648 out.go:374] Setting ErrFile to fd 2...
	I1019 17:34:26.387542  237648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:34:26.387835  237648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:34:26.388140  237648 out.go:368] Setting JSON to false
	I1019 17:34:26.388177  237648 mustload.go:66] Loading cluster: no-preload-038781
	I1019 17:34:26.388633  237648 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:34:26.389182  237648 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:34:26.408506  237648 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:34:26.408891  237648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:34:26.469020  237648 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-19 17:34:26.459654199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:34:26.469716  237648 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-038781 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:34:26.476789  237648 out.go:179] * Pausing node no-preload-038781 ... 
	I1019 17:34:26.479720  237648 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:34:26.480079  237648 ssh_runner.go:195] Run: systemctl --version
	I1019 17:34:26.480122  237648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:34:26.498027  237648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:34:26.610458  237648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:34:26.627348  237648 pause.go:52] kubelet running: true
	I1019 17:34:26.627428  237648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:34:26.889868  237648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:34:26.889962  237648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:34:26.972119  237648 cri.go:89] found id: "d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8"
	I1019 17:34:26.972145  237648 cri.go:89] found id: "1c6f01729c8ea65f68f7c74cd0edce25f7839aa8e906e5eaaf9f59dea56c3592"
	I1019 17:34:26.972149  237648 cri.go:89] found id: "7295d170c9f1c652ed83cb31b1b942d47a5e8f0ac28ddf7808882e1b9c515fda"
	I1019 17:34:26.972153  237648 cri.go:89] found id: "aa2e6a947fb42538c3f95b4e424f09d0784485f208dbe2872cdb5a5c87988222"
	I1019 17:34:26.972156  237648 cri.go:89] found id: "63a21cb0dd8ac64312c63edbf6eba4361cba29f0413fe4f5a288ccef35e3d0a1"
	I1019 17:34:26.972160  237648 cri.go:89] found id: "4ecdc75b36a4c7a3c825f206e45adee636659afda96007f457af8b243c9114c0"
	I1019 17:34:26.972163  237648 cri.go:89] found id: "2f46f60d6de64b25c99d5aa47d9dc9db10c0069af1a4f16eecbb3dd6f2acb2c4"
	I1019 17:34:26.972166  237648 cri.go:89] found id: "0d0e37aed3838a493242b37f3c40b53f5f97a88b5709f7d8b16dab4324bbcaef"
	I1019 17:34:26.972170  237648 cri.go:89] found id: "536e5d3cd6aab4df09c0f25b4fa64db7b03ae73bd5300a9691e1868e1678cd99"
	I1019 17:34:26.972176  237648 cri.go:89] found id: "4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	I1019 17:34:26.972179  237648 cri.go:89] found id: "8716b30ad849506fd3f8f4715e585b04ced2a15cf9ed5a6881825f2a54647510"
	I1019 17:34:26.972183  237648 cri.go:89] found id: ""
	I1019 17:34:26.972229  237648 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:34:26.984025  237648 retry.go:31] will retry after 222.113959ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:34:26Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:34:27.206372  237648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:34:27.219715  237648 pause.go:52] kubelet running: false
	I1019 17:34:27.219797  237648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:34:27.381874  237648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:34:27.382016  237648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:34:27.459286  237648 cri.go:89] found id: "d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8"
	I1019 17:34:27.459347  237648 cri.go:89] found id: "1c6f01729c8ea65f68f7c74cd0edce25f7839aa8e906e5eaaf9f59dea56c3592"
	I1019 17:34:27.459365  237648 cri.go:89] found id: "7295d170c9f1c652ed83cb31b1b942d47a5e8f0ac28ddf7808882e1b9c515fda"
	I1019 17:34:27.459383  237648 cri.go:89] found id: "aa2e6a947fb42538c3f95b4e424f09d0784485f208dbe2872cdb5a5c87988222"
	I1019 17:34:27.459401  237648 cri.go:89] found id: "63a21cb0dd8ac64312c63edbf6eba4361cba29f0413fe4f5a288ccef35e3d0a1"
	I1019 17:34:27.459428  237648 cri.go:89] found id: "4ecdc75b36a4c7a3c825f206e45adee636659afda96007f457af8b243c9114c0"
	I1019 17:34:27.459445  237648 cri.go:89] found id: "2f46f60d6de64b25c99d5aa47d9dc9db10c0069af1a4f16eecbb3dd6f2acb2c4"
	I1019 17:34:27.459463  237648 cri.go:89] found id: "0d0e37aed3838a493242b37f3c40b53f5f97a88b5709f7d8b16dab4324bbcaef"
	I1019 17:34:27.459484  237648 cri.go:89] found id: "536e5d3cd6aab4df09c0f25b4fa64db7b03ae73bd5300a9691e1868e1678cd99"
	I1019 17:34:27.459505  237648 cri.go:89] found id: "4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	I1019 17:34:27.459523  237648 cri.go:89] found id: "8716b30ad849506fd3f8f4715e585b04ced2a15cf9ed5a6881825f2a54647510"
	I1019 17:34:27.459541  237648 cri.go:89] found id: ""
	I1019 17:34:27.459628  237648 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:34:27.470818  237648 retry.go:31] will retry after 538.172741ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:34:27Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:34:28.009583  237648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:34:28.023872  237648 pause.go:52] kubelet running: false
	I1019 17:34:28.023984  237648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:34:28.196112  237648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:34:28.196214  237648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:34:28.261576  237648 cri.go:89] found id: "d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8"
	I1019 17:34:28.261641  237648 cri.go:89] found id: "1c6f01729c8ea65f68f7c74cd0edce25f7839aa8e906e5eaaf9f59dea56c3592"
	I1019 17:34:28.261661  237648 cri.go:89] found id: "7295d170c9f1c652ed83cb31b1b942d47a5e8f0ac28ddf7808882e1b9c515fda"
	I1019 17:34:28.261681  237648 cri.go:89] found id: "aa2e6a947fb42538c3f95b4e424f09d0784485f208dbe2872cdb5a5c87988222"
	I1019 17:34:28.261700  237648 cri.go:89] found id: "63a21cb0dd8ac64312c63edbf6eba4361cba29f0413fe4f5a288ccef35e3d0a1"
	I1019 17:34:28.261729  237648 cri.go:89] found id: "4ecdc75b36a4c7a3c825f206e45adee636659afda96007f457af8b243c9114c0"
	I1019 17:34:28.261746  237648 cri.go:89] found id: "2f46f60d6de64b25c99d5aa47d9dc9db10c0069af1a4f16eecbb3dd6f2acb2c4"
	I1019 17:34:28.261764  237648 cri.go:89] found id: "0d0e37aed3838a493242b37f3c40b53f5f97a88b5709f7d8b16dab4324bbcaef"
	I1019 17:34:28.261782  237648 cri.go:89] found id: "536e5d3cd6aab4df09c0f25b4fa64db7b03ae73bd5300a9691e1868e1678cd99"
	I1019 17:34:28.261821  237648 cri.go:89] found id: "4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	I1019 17:34:28.261847  237648 cri.go:89] found id: "8716b30ad849506fd3f8f4715e585b04ced2a15cf9ed5a6881825f2a54647510"
	I1019 17:34:28.261864  237648 cri.go:89] found id: ""
	I1019 17:34:28.261944  237648 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:34:28.276080  237648 out.go:203] 
	W1019 17:34:28.278899  237648 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:34:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:34:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:34:28.278919  237648 out.go:285] * 
	* 
	W1019 17:34:28.283885  237648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:34:28.286985  237648 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-038781 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-038781
helpers_test.go:243: (dbg) docker inspect no-preload-038781:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc",
	        "Created": "2025-10-19T17:31:51.406561575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:33:22.007891381Z",
	            "FinishedAt": "2025-10-19T17:33:20.927764282Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/hosts",
	        "LogPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc-json.log",
	        "Name": "/no-preload-038781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-038781:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-038781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc",
	                "LowerDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-038781",
	                "Source": "/var/lib/docker/volumes/no-preload-038781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-038781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-038781",
	                "name.minikube.sigs.k8s.io": "no-preload-038781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b628041d5d4a3e0351fb5578481d9491ab91da8c6997622c33fc2966be9092a8",
	            "SandboxKey": "/var/run/docker/netns/b628041d5d4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-038781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:66:61:ca:41:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b07775101cd68c8ddd9de09f237af6ede6d8644dfb4bb5013ca32815c3f150a",
	                    "EndpointID": "64ae0bcdb69a4f7f287915acb47c7230dd64c468a7d59c619d01fd40a797fab4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-038781",
	                        "4de6d765b1ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781: exit status 2 (345.082285ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-038781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-038781 logs -n 25: (1.304228118s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-953581 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo containerd config dump                                                                                                                                                                                                  │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo crio config                                                                                                                                                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ delete  │ -p bridge-953581                                                                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ stop    │ -p old-k8s-version-125363 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:32 UTC │
	│ start   │ -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                                                                                               │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314     │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ image   │ no-preload-038781 image list --format=json                                                                                                                                                                                                    │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:33:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:33:28.277182  233919 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:33:28.277335  233919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:28.277341  233919 out.go:374] Setting ErrFile to fd 2...
	I1019 17:33:28.277346  233919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:28.277617  233919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:33:28.278089  233919 out.go:368] Setting JSON to false
	I1019 17:33:28.278997  233919 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4556,"bootTime":1760890652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:33:28.279071  233919 start.go:143] virtualization:  
	I1019 17:33:28.282664  233919 out.go:179] * [embed-certs-296314] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:33:28.285888  233919 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:33:28.285955  233919 notify.go:221] Checking for updates...
	I1019 17:33:28.291964  233919 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:33:28.294858  233919 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:33:28.298600  233919 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:33:28.304040  233919 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:33:28.306995  233919 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:33:28.310377  233919 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:33:28.310478  233919 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:33:28.346666  233919 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:33:28.346793  233919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:33:28.460375  233919 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:33:28.424716141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:33:28.460485  233919 docker.go:319] overlay module found
	I1019 17:33:28.463638  233919 out.go:179] * Using the docker driver based on user configuration
	I1019 17:33:28.466605  233919 start.go:309] selected driver: docker
	I1019 17:33:28.466628  233919 start.go:930] validating driver "docker" against <nil>
	I1019 17:33:28.466641  233919 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:33:28.467352  233919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:33:28.563983  233919 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:33:28.553551864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:33:28.564131  233919 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:33:28.564350  233919 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:33:28.567264  233919 out.go:179] * Using Docker driver with root privileges
	I1019 17:33:28.570129  233919 cni.go:84] Creating CNI manager for ""
	I1019 17:33:28.570190  233919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:33:28.570197  233919 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:33:28.570277  233919 start.go:353] cluster config:
	{Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:28.573232  233919 out.go:179] * Starting "embed-certs-296314" primary control-plane node in "embed-certs-296314" cluster
	I1019 17:33:28.576068  233919 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:33:28.579012  233919 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:33:28.581820  233919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:28.581879  233919 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:33:28.581889  233919 cache.go:59] Caching tarball of preloaded images
	I1019 17:33:28.581969  233919 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:33:28.581977  233919 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:33:28.582106  233919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json ...
	I1019 17:33:28.582124  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json: {Name:mk36693101c8fc969669726520164b9d80aaac03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:28.582290  233919 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:33:28.610893  233919 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:33:28.610914  233919 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:33:28.610936  233919 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:33:28.610962  233919 start.go:360] acquireMachinesLock for embed-certs-296314: {Name:mkbadf116eb8b8b2fc66452f2f3b93b38bb1a004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:28.611063  233919 start.go:364] duration metric: took 86.573µs to acquireMachinesLock for "embed-certs-296314"
	I1019 17:33:28.611093  233919 start.go:93] Provisioning new machine with config: &{Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:33:28.611182  233919 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:33:27.003704  232207 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:33:27.003732  232207 machine.go:97] duration metric: took 4.577520969s to provisionDockerMachine
	I1019 17:33:27.003762  232207 start.go:293] postStartSetup for "no-preload-038781" (driver="docker")
	I1019 17:33:27.003776  232207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:33:27.003859  232207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:33:27.003906  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.030344  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.148305  232207 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:33:27.152118  232207 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:33:27.152144  232207 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:33:27.152155  232207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:33:27.152231  232207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:33:27.152306  232207 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:33:27.152404  232207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:33:27.161123  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:27.182062  232207 start.go:296] duration metric: took 178.282871ms for postStartSetup
	I1019 17:33:27.182145  232207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:33:27.182200  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.208174  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.316528  232207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:33:27.323438  232207 fix.go:56] duration metric: took 5.38097894s for fixHost
	I1019 17:33:27.323461  232207 start.go:83] releasing machines lock for "no-preload-038781", held for 5.381035581s
	I1019 17:33:27.323539  232207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-038781
	I1019 17:33:27.346441  232207 ssh_runner.go:195] Run: cat /version.json
	I1019 17:33:27.346515  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.346863  232207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:33:27.346942  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.383616  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.396428  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.487262  232207 ssh_runner.go:195] Run: systemctl --version
	I1019 17:33:27.611746  232207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:33:27.697901  232207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:33:27.703234  232207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:33:27.703302  232207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:33:27.714224  232207 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:33:27.714246  232207 start.go:496] detecting cgroup driver to use...
	I1019 17:33:27.714277  232207 detect.go:187] detected "cgroupfs" cgroup driver on host os
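
The "detected cgroupfs" line above comes from probing the host's cgroup layout before CRI-O is configured. A minimal Go sketch of one common heuristic for that probe (an illustration only, not necessarily the check detect.go performs): the unified cgroup v2 hierarchy exposes /sys/fs/cgroup/cgroup.controllers, and its absence suggests a v1 hierarchy where "cgroupfs" is the conservative driver choice.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// cgroup v2's unified hierarchy exposes cgroup.controllers at the
		// mount root; on a v1 hierarchy (as on this Ubuntu 20.04 host) the
		// file is absent.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 unified hierarchy detected")
		} else {
			fmt.Println("cgroup v1 hierarchy; defaulting to the cgroupfs driver")
		}
	}
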
	I1019 17:33:27.714318  232207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:33:27.732129  232207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:33:27.747396  232207 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:33:27.747468  232207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:33:27.763558  232207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:33:27.779993  232207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:33:28.022154  232207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:33:28.217997  232207 docker.go:234] disabling docker service ...
	I1019 17:33:28.218085  232207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:33:28.239387  232207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:33:28.255875  232207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:33:28.436264  232207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:33:28.598736  232207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:33:28.612566  232207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:33:28.629979  232207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:33:28.630037  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.641347  232207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:33:28.641408  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.651326  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.663334  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.674368  232207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:33:28.684633  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.695620  232207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.714004  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.726350  232207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:33:28.737644  232207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:33:28.747093  232207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:28.914688  232207 ssh_runner.go:195] Run: sudo systemctl restart crio
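
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) before crio is restarted. A hedged Go sketch of the first of those edits, using regexp instead of sed; this is a standalone illustration, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path from the log
		data, err := os.ReadFile(conf)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
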
	I1019 17:33:29.092410  232207 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:33:29.092477  232207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:33:29.098362  232207 start.go:564] Will wait 60s for crictl version
	I1019 17:33:29.098421  232207 ssh_runner.go:195] Run: which crictl
	I1019 17:33:29.102440  232207 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:33:29.135221  232207 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:33:29.135297  232207 ssh_runner.go:195] Run: crio --version
	I1019 17:33:29.194932  232207 ssh_runner.go:195] Run: crio --version
	I1019 17:33:29.236198  232207 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:33:29.239136  232207 cli_runner.go:164] Run: docker network inspect no-preload-038781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:33:29.261350  232207 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:33:29.265130  232207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
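
The bash one-liner above refreshes the host.minikube.internal entry by filtering out any stale line, appending the new mapping, and copying a temp file over /etc/hosts. The same filter-and-append step in Go, sketched under the assumption that an in-place write is acceptable (the log's temp-file-plus-cp dance is closer to atomic):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Drop any stale entry, mirroring: grep -v $'\thost.minikube.internal$'
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		// Append the fresh mapping printed in the log above.
		kept = append(kept, "192.168.76.1\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
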
	I1019 17:33:29.321790  232207 kubeadm.go:884] updating cluster {Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:33:29.321912  232207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:29.321951  232207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:33:29.384388  232207 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:33:29.384409  232207 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:33:29.384417  232207 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:33:29.384518  232207 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-038781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
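
In the drop-in above, the empty ExecStart= line is the systemd convention for clearing the base unit's command before substituting a new one; without it, systemd would reject a second ExecStart for a simple service. A sketch that writes an abridged version of the same drop-in (the flag set is trimmed for brevity; path and remaining flags are copied from the log):

	package main

	import "os"

	func main() {
		// Abridged form of the drop-in that is scp'd to
		// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below.
		dropin := `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

	[Install]
	`
		if err := os.WriteFile("10-kubeadm.conf", []byte(dropin), 0o644); err != nil {
			panic(err)
		}
	}
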
	I1019 17:33:29.384619  232207 ssh_runner.go:195] Run: crio config
	I1019 17:33:29.472761  232207 cni.go:84] Creating CNI manager for ""
	I1019 17:33:29.472825  232207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:33:29.472862  232207 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:33:29.472906  232207 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-038781 NodeName:no-preload-038781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:33:29.473061  232207 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-038781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
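
The generated kubeadm config above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in a single file; in the log it lands at /var/tmp/minikube/kubeadm.yaml.new. A hedged Go sketch that walks such a multi-document file with gopkg.in/yaml.v3 and reads back the kubelet's cgroup driver (the local file name is an assumption for illustration):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // assumed local copy of the manifest above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			// Decode each "---"-separated document loosely; unknown fields
			// are ignored, so one struct covers all four kinds.
			var doc struct {
				Kind         string `yaml:"kind"`
				CgroupDriver string `yaml:"cgroupDriver"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc.Kind == "KubeletConfiguration" {
				fmt.Println("kubelet cgroupDriver:", doc.CgroupDriver) // "cgroupfs" here
			}
		}
	}
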
	
	I1019 17:33:29.473148  232207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:33:29.492477  232207 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:33:29.492558  232207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:33:29.504920  232207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:33:29.520982  232207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:33:29.536740  232207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1019 17:33:29.558231  232207 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:33:29.569275  232207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:33:29.581524  232207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:29.745251  232207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:33:29.761260  232207 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781 for IP: 192.168.76.2
	I1019 17:33:29.761320  232207 certs.go:195] generating shared ca certs ...
	I1019 17:33:29.761352  232207 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:29.761518  232207 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:33:29.761590  232207 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:33:29.761612  232207 certs.go:257] generating profile certs ...
	I1019 17:33:29.761730  232207 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key
	I1019 17:33:29.761844  232207 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d
	I1019 17:33:29.761910  232207 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key
	I1019 17:33:29.762055  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:33:29.762122  232207 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:33:29.762158  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:33:29.762208  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:33:29.762262  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:33:29.762316  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:33:29.762399  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:29.763053  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:33:29.797012  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:33:29.829624  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:33:29.858887  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:33:29.885905  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:33:29.912896  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:33:29.967935  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:33:29.993770  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:33:30.071324  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:33:30.109539  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:33:30.136824  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:33:30.158664  232207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:33:30.175070  232207 ssh_runner.go:195] Run: openssl version
	I1019 17:33:30.184499  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:33:30.194973  232207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:33:30.199385  232207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:33:30.199501  232207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:33:30.241939  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:33:30.251019  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:33:30.260429  232207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:30.268967  232207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:30.269068  232207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:30.311217  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:33:30.319197  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:33:30.327472  232207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:33:30.332255  232207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:33:30.332379  232207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:33:30.377517  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
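
Each openssl x509 -hash -noout call above prints the subject-name hash OpenSSL uses for hashed CA directories, and the follow-up ln -fs publishes the PEM as <hash>.0 under /etc/ssl/certs. A sketch of that pair of steps driven from Go, shelling out to openssl since the subject-hash algorithm is not in the standard library (paths mirror the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject-name hash used by hashed CA directories.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // "b5213941" for this CA in the log
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", link, "->", pem)
	}
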
	I1019 17:33:30.385528  232207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:33:30.389951  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:33:30.432703  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:33:30.475365  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:33:30.543147  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:33:30.635022  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:33:30.758163  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
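
The -checkend 86400 probes above make openssl exit non-zero if a certificate expires within the next 24 hours. The equivalent check in pure Go with crypto/x509, sketched against one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Mirrors openssl's -checkend 86400: fail if expiry falls inside 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
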
	I1019 17:33:30.868818  232207 kubeadm.go:401] StartCluster: {Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:30.868913  232207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:33:30.868999  232207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:33:30.948539  232207 cri.go:89] found id: "4ecdc75b36a4c7a3c825f206e45adee636659afda96007f457af8b243c9114c0"
	I1019 17:33:30.948599  232207 cri.go:89] found id: "2f46f60d6de64b25c99d5aa47d9dc9db10c0069af1a4f16eecbb3dd6f2acb2c4"
	I1019 17:33:30.948621  232207 cri.go:89] found id: "0d0e37aed3838a493242b37f3c40b53f5f97a88b5709f7d8b16dab4324bbcaef"
	I1019 17:33:30.948642  232207 cri.go:89] found id: "536e5d3cd6aab4df09c0f25b4fa64db7b03ae73bd5300a9691e1868e1678cd99"
	I1019 17:33:30.948660  232207 cri.go:89] found id: ""
	I1019 17:33:30.948779  232207 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:33:30.980090  232207 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:33:30Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:33:30.980227  232207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:33:30.989084  232207 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:33:30.989160  232207 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:33:30.989242  232207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:33:30.997384  232207 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:33:30.997857  232207 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-038781" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:33:30.998011  232207 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-038781" cluster setting kubeconfig missing "no-preload-038781" context setting]
	I1019 17:33:30.998372  232207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:30.999934  232207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:33:31.028973  232207 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:33:31.029052  232207 kubeadm.go:602] duration metric: took 39.863988ms to restartPrimaryControlPlane
	I1019 17:33:31.029076  232207 kubeadm.go:403] duration metric: took 160.268431ms to StartCluster
	I1019 17:33:31.029129  232207 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:31.029210  232207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:33:31.029835  232207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:31.030087  232207 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:33:31.030435  232207 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:33:31.030495  232207 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:33:31.030647  232207 addons.go:70] Setting storage-provisioner=true in profile "no-preload-038781"
	I1019 17:33:31.030666  232207 addons.go:239] Setting addon storage-provisioner=true in "no-preload-038781"
	W1019 17:33:31.030677  232207 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:33:31.030699  232207 addons.go:70] Setting dashboard=true in profile "no-preload-038781"
	I1019 17:33:31.030736  232207 addons.go:239] Setting addon dashboard=true in "no-preload-038781"
	W1019 17:33:31.030756  232207 addons.go:248] addon dashboard should already be in state true
	I1019 17:33:31.030789  232207 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:33:31.030701  232207 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:33:31.031317  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.031356  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.030708  232207 addons.go:70] Setting default-storageclass=true in profile "no-preload-038781"
	I1019 17:33:31.031907  232207 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-038781"
	I1019 17:33:31.032174  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.036955  232207 out.go:179] * Verifying Kubernetes components...
	I1019 17:33:31.040179  232207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:31.072891  232207 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:33:31.077971  232207 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:33:31.077994  232207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:33:31.078064  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:31.093532  232207 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:33:31.100404  232207 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:33:31.103284  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:33:31.103307  232207 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:33:31.103376  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:31.111488  232207 addons.go:239] Setting addon default-storageclass=true in "no-preload-038781"
	W1019 17:33:31.111512  232207 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:33:31.111537  232207 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:33:31.111957  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.137971  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:31.162078  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:31.163252  232207 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:33:31.163280  232207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:33:31.163341  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:31.189770  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:31.485180  232207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:33:31.544968  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:33:31.544994  232207 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:33:31.561608  232207 node_ready.go:35] waiting up to 6m0s for node "no-preload-038781" to be "Ready" ...
	I1019 17:33:28.615089  233919 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:33:28.615344  233919 start.go:159] libmachine.API.Create for "embed-certs-296314" (driver="docker")
	I1019 17:33:28.615396  233919 client.go:171] LocalClient.Create starting
	I1019 17:33:28.615463  233919 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 17:33:28.615503  233919 main.go:143] libmachine: Decoding PEM data...
	I1019 17:33:28.615522  233919 main.go:143] libmachine: Parsing certificate...
	I1019 17:33:28.615603  233919 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 17:33:28.615631  233919 main.go:143] libmachine: Decoding PEM data...
	I1019 17:33:28.615645  233919 main.go:143] libmachine: Parsing certificate...
	I1019 17:33:28.616069  233919 cli_runner.go:164] Run: docker network inspect embed-certs-296314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:33:28.636317  233919 cli_runner.go:211] docker network inspect embed-certs-296314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:33:28.636400  233919 network_create.go:284] running [docker network inspect embed-certs-296314] to gather additional debugging logs...
	I1019 17:33:28.636421  233919 cli_runner.go:164] Run: docker network inspect embed-certs-296314
	W1019 17:33:28.656331  233919 cli_runner.go:211] docker network inspect embed-certs-296314 returned with exit code 1
	I1019 17:33:28.656371  233919 network_create.go:287] error running [docker network inspect embed-certs-296314]: docker network inspect embed-certs-296314: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-296314 not found
	I1019 17:33:28.656385  233919 network_create.go:289] output of [docker network inspect embed-certs-296314]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-296314 not found
	
	** /stderr **
	I1019 17:33:28.656476  233919 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:33:28.678243  233919 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
	I1019 17:33:28.678620  233919 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74bebb68d32f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:9e:84:17:01:b0} reservation:<nil>}
	I1019 17:33:28.679027  233919 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9382370e2eea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:16:7c:3f:44:e1} reservation:<nil>}
	I1019 17:33:28.679294  233919 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3b07775101cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8a:d8:e7:d0:b2:4a} reservation:<nil>}
	I1019 17:33:28.679689  233919 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a10440}
	I1019 17:33:28.679716  233919 network_create.go:124] attempt to create docker network embed-certs-296314 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:33:28.679777  233919 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-296314 embed-certs-296314
	I1019 17:33:28.750485  233919 network_create.go:108] docker network embed-certs-296314 192.168.85.0/24 created
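
The subnet probe above walks candidate 192.168.x.0/24 networks, stepping the third octet by 9 (49, 58, 67, 76, ...) and taking the first CIDR with no existing bridge. A minimal Go sketch of that scan (not minikube's actual implementation), with the occupied subnets hard-coded from the log rather than discovered via `docker network inspect`:

	// subnet_scan.go — sketch of the free-subnet probe logged above.
	package main

	import (
		"fmt"
		"net"
	)

	// taken would normally come from inspecting existing docker bridges; it is
	// hard-coded here to mirror the subnets the log reports as occupied.
	var taken = map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}

	func main() {
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			if _, _, err := net.ParseCIDR(cidr); err != nil {
				continue // defensive: candidates should always parse
			}
			fmt.Println("using free private subnet", cidr) // 192.168.85.0/24 here
			return
		}
	}
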
	I1019 17:33:28.750519  233919 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-296314" container
	I1019 17:33:28.750791  233919 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:33:28.765359  233919 cli_runner.go:164] Run: docker volume create embed-certs-296314 --label name.minikube.sigs.k8s.io=embed-certs-296314 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:33:28.783905  233919 oci.go:103] Successfully created a docker volume embed-certs-296314
	I1019 17:33:28.783990  233919 cli_runner.go:164] Run: docker run --rm --name embed-certs-296314-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-296314 --entrypoint /usr/bin/test -v embed-certs-296314:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:33:29.413935  233919 oci.go:107] Successfully prepared a docker volume embed-certs-296314
	I1019 17:33:29.413971  233919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:29.413990  233919 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:33:29.414068  233919 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-296314:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
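
The extraction step mounts the lz4 preload tarball read-only into a throwaway kicbase container and untars it into the cluster's named volume. A sketch of issuing the same `docker run` from Go; tarball path, volume name, and image tag are copied from the log line above (the image digest is dropped for brevity):

	// preload_extract.go — sketch of the preload extraction logged above.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		tarball := "/home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
		volume := "embed-certs-296314"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"

		// tar runs inside the container, so lz4 support comes from the image,
		// not the host; the named volume ends up holding the preloaded images.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
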
	I1019 17:33:31.585792  232207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:33:31.631386  232207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:33:31.649292  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:33:31.649320  232207 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:33:31.736853  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:33:31.736892  232207 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:33:31.847910  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:33:31.847943  232207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:33:31.939519  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:33:31.939591  232207 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:33:32.022269  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:33:32.022310  232207 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:33:32.048604  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:33:32.048647  232207 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:33:32.076458  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:33:32.076495  232207 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:33:32.097066  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:33:32.097138  232207 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:33:32.120250  232207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
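
The dashboard addon step above shells out to the pinned kubectl binary with KUBECONFIG scoped to that one command. A minimal sketch of the invocation, eliding most of the -f flags shown in the log; sudo accepts leading VAR=value assignments, which is what makes the env scoping work:

	// apply_addons.go — sketch of the kubectl apply logged above.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
			"-f", "/etc/kubernetes/addons/dashboard-svc.yaml", // remaining -f flags elided
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("kubectl apply failed: %v\n%s", err, out)
		}
	}
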
	I1019 17:33:35.034059  233919 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-296314:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.619942329s)
	I1019 17:33:35.034088  233919 kic.go:203] duration metric: took 5.620094577s to extract preloaded images to volume ...
	W1019 17:33:35.034218  233919 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:33:35.034322  233919 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:33:35.136334  233919 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-296314 --name embed-certs-296314 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-296314 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-296314 --network embed-certs-296314 --ip 192.168.85.2 --volume embed-certs-296314:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:33:35.568393  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Running}}
	I1019 17:33:35.602776  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:33:35.638678  233919 cli_runner.go:164] Run: docker exec embed-certs-296314 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:33:35.695120  233919 oci.go:144] the created container "embed-certs-296314" has a running status.
	I1019 17:33:35.695160  233919 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa...
	I1019 17:33:36.041962  233919 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:33:36.068712  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:33:36.095625  233919 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:33:36.095654  233919 kic_runner.go:114] Args: [docker exec --privileged embed-certs-296314 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:33:36.164819  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:33:36.200901  233919 machine.go:94] provisionDockerMachine start ...
	I1019 17:33:36.200998  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:36.236320  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:36.236640  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:36.236649  233919 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:33:36.237219  233919 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50948->127.0.0.1:33103: read: connection reset by peer
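
The first handshake is reset because sshd inside the freshly started container is not accepting connections yet; provisioning simply retries until the mapped port answers (the hostname command succeeds a few lines further down, at 17:33:39). A stdlib-only sketch of such a wait loop:

	// ssh_wait.go — sketch of waiting for the mapped SSH port; 127.0.0.1:33103
	// is the host port Docker bound to the container's 22/tcp.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // port is accepting connections
			}
			time.Sleep(500 * time.Millisecond) // back off and retry
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		if err := waitForPort("127.0.0.1:33103", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
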
	I1019 17:33:37.722053  232207 node_ready.go:49] node "no-preload-038781" is "Ready"
	I1019 17:33:37.722079  232207 node_ready.go:38] duration metric: took 6.160426066s for node "no-preload-038781" to be "Ready" ...
	I1019 17:33:37.722092  232207 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:33:37.722152  232207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:33:37.944654  232207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.358826256s)
	I1019 17:33:39.538757  232207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.907336317s)
	I1019 17:33:39.538877  232207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.418546923s)
	I1019 17:33:39.539020  232207 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.81685762s)
	I1019 17:33:39.539040  232207 api_server.go:72] duration metric: took 8.508901382s to wait for apiserver process to appear ...
	I1019 17:33:39.539048  232207 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:33:39.539069  232207 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:33:39.541856  232207 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-038781 addons enable metrics-server
	
	I1019 17:33:39.544679  232207 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1019 17:33:39.548725  232207 addons.go:515] duration metric: took 8.518225186s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1019 17:33:39.550996  232207 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:33:39.553253  232207 api_server.go:141] control plane version: v1.34.1
	I1019 17:33:39.553282  232207 api_server.go:131] duration metric: took 14.224244ms to wait for apiserver health ...
	I1019 17:33:39.553292  232207 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:33:39.557956  232207 system_pods.go:59] 8 kube-system pods found
	I1019 17:33:39.558002  232207 system_pods.go:61] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:33:39.558011  232207 system_pods.go:61] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:33:39.558017  232207 system_pods.go:61] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:33:39.558025  232207 system_pods.go:61] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:33:39.558033  232207 system_pods.go:61] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:33:39.558046  232207 system_pods.go:61] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:33:39.558056  232207 system_pods.go:61] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:33:39.558061  232207 system_pods.go:61] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Running
	I1019 17:33:39.558074  232207 system_pods.go:74] duration metric: took 4.775581ms to wait for pod list to return data ...
	I1019 17:33:39.558082  232207 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:33:39.561639  232207 default_sa.go:45] found service account: "default"
	I1019 17:33:39.561666  232207 default_sa.go:55] duration metric: took 3.574103ms for default service account to be created ...
	I1019 17:33:39.561676  232207 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:33:39.565301  232207 system_pods.go:86] 8 kube-system pods found
	I1019 17:33:39.565338  232207 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:33:39.565347  232207 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:33:39.565352  232207 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:33:39.565359  232207 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:33:39.565367  232207 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:33:39.565373  232207 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:33:39.565389  232207 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:33:39.565397  232207 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Running
	I1019 17:33:39.565405  232207 system_pods.go:126] duration metric: took 3.72238ms to wait for k8s-apps to be running ...
	I1019 17:33:39.565413  232207 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:33:39.565472  232207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:33:39.580586  232207 system_svc.go:56] duration metric: took 15.16245ms WaitForService to wait for kubelet
	I1019 17:33:39.580609  232207 kubeadm.go:587] duration metric: took 8.550469377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:33:39.580628  232207 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:33:39.584451  232207 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:33:39.584480  232207 node_conditions.go:123] node cpu capacity is 2
	I1019 17:33:39.584491  232207 node_conditions.go:105] duration metric: took 3.857094ms to run NodePressure ...
	I1019 17:33:39.584503  232207 start.go:242] waiting for startup goroutines ...
	I1019 17:33:39.584511  232207 start.go:247] waiting for cluster config update ...
	I1019 17:33:39.584521  232207 start.go:256] writing updated cluster config ...
	I1019 17:33:39.584803  232207 ssh_runner.go:195] Run: rm -f paused
	I1019 17:33:39.589618  232207 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:33:39.593812  232207 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:39.402619  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-296314
	
	I1019 17:33:39.402685  233919 ubuntu.go:182] provisioning hostname "embed-certs-296314"
	I1019 17:33:39.402778  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:39.439111  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:39.439411  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:39.439423  233919 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-296314 && echo "embed-certs-296314" | sudo tee /etc/hostname
	I1019 17:33:39.616820  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-296314
	
	I1019 17:33:39.616944  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:39.641738  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:39.642053  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:39.642076  233919 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-296314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-296314/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-296314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:33:39.806881  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: 
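
The SSH command above pins the hostname in /etc/hosts idempotently: rewrite an existing 127.0.1.1 entry if present, otherwise append one. The same logic as a Go sketch, operating on a scratch copy of the file (the real script also skips the edit entirely when the hostname is already mapped):

	// pin_hostname.go — the /etc/hosts edit in Go, against a scratch copy so
	// it can run without root.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func pinHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite existing entry
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}

	func main() {
		if err := pinHostname("/tmp/hosts.copy", "embed-certs-296314"); err != nil {
			fmt.Println(err)
		}
	}
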
	I1019 17:33:39.806907  233919 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:33:39.806926  233919 ubuntu.go:190] setting up certificates
	I1019 17:33:39.806936  233919 provision.go:84] configureAuth start
	I1019 17:33:39.807005  233919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:33:39.829328  233919 provision.go:143] copyHostCerts
	I1019 17:33:39.829399  233919 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:33:39.829413  233919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:33:39.829492  233919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:33:39.829588  233919 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:33:39.829599  233919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:33:39.829629  233919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:33:39.829683  233919 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:33:39.829692  233919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:33:39.829721  233919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:33:39.829783  233919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.embed-certs-296314 san=[127.0.0.1 192.168.85.2 embed-certs-296314 localhost minikube]
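
configureAuth generates a server certificate whose SANs cover every name the machine may be reached by: 127.0.0.1, 192.168.85.2, embed-certs-296314, localhost, and minikube. A minimal crypto/x509 sketch with those SANs, self-signed for brevity (minikube signs with its CA key instead); the 26280h lifetime mirrors the CertExpiration value in the cluster config below:

	// server_cert.go — self-signed sketch with the SANs from the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-296314"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs: every address/name the server may be dialed on.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:    []string{"embed-certs-296314", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
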
	I1019 17:33:41.062833  233919 provision.go:177] copyRemoteCerts
	I1019 17:33:41.062922  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:33:41.062971  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.083535  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:41.202275  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:33:41.224325  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:33:41.244324  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:33:41.268898  233919 provision.go:87] duration metric: took 1.461939276s to configureAuth
	I1019 17:33:41.268968  233919 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:33:41.269169  233919 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:33:41.269273  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.288644  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:41.288949  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:41.288974  233919 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:33:41.671524  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:33:41.671546  233919 machine.go:97] duration metric: took 5.470625492s to provisionDockerMachine
	I1019 17:33:41.671555  233919 client.go:174] duration metric: took 13.056148544s to LocalClient.Create
	I1019 17:33:41.671568  233919 start.go:167] duration metric: took 13.0562256s to libmachine.API.Create "embed-certs-296314"
	I1019 17:33:41.671575  233919 start.go:293] postStartSetup for "embed-certs-296314" (driver="docker")
	I1019 17:33:41.671585  233919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:33:41.671648  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:33:41.671687  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.690040  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:41.800857  233919 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:33:41.805164  233919 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:33:41.805193  233919 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:33:41.805208  233919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:33:41.805277  233919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:33:41.805372  233919 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:33:41.805507  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:33:41.823047  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:41.860096  233919 start.go:296] duration metric: took 188.503291ms for postStartSetup
	I1019 17:33:41.860483  233919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:33:41.884204  233919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json ...
	I1019 17:33:41.884505  233919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:33:41.884551  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.912206  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:42.014639  233919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:33:42.027521  233919 start.go:128] duration metric: took 13.416322745s to createHost
	I1019 17:33:42.027564  233919 start.go:83] releasing machines lock for "embed-certs-296314", held for 13.416490198s
	I1019 17:33:42.027684  233919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:33:42.045796  233919 ssh_runner.go:195] Run: cat /version.json
	I1019 17:33:42.045858  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:42.046102  233919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:33:42.046167  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:42.068974  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:42.088084  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:42.200786  233919 ssh_runner.go:195] Run: systemctl --version
	I1019 17:33:42.298583  233919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:33:42.340777  233919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:33:42.344979  233919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:33:42.345092  233919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:33:42.377694  233919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 17:33:42.377802  233919 start.go:496] detecting cgroup driver to use...
	I1019 17:33:42.377868  233919 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:33:42.377949  233919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:33:42.404791  233919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:33:42.420843  233919 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:33:42.420951  233919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:33:42.442307  233919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:33:42.473465  233919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:33:42.636175  233919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:33:42.769488  233919 docker.go:234] disabling docker service ...
	I1019 17:33:42.769559  233919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:33:42.815639  233919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:33:42.843855  233919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:33:43.038855  233919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:33:43.222919  233919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:33:43.245331  233919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:33:43.274123  233919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:33:43.274227  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.289339  233919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:33:43.289445  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.311465  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.330812  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.343257  233919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:33:43.354044  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.371089  233919 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.389293  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.399585  233919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:33:43.408164  233919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
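
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1 and the cgroup manager is forced to cgroupfs to match the driver detected on the host. The same two substitutions as Go regexp rewrites; the starting values ("pause:3.9", "systemd") are illustrative placeholders, not values taken from the log:

	// crio_conf.go — sketch of the two config rewrites done above with sed.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Placeholder starting content (assumed, not from the log).
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\""

		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		// In the real flow the result is written back to
		// /etc/crio/crio.conf.d/02-crio.conf and crio is restarted.
		fmt.Println(conf)
	}
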
	I1019 17:33:43.416337  233919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:43.570864  233919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:33:43.773207  233919 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:33:43.773334  233919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:33:43.784739  233919 start.go:564] Will wait 60s for crictl version
	I1019 17:33:43.784901  233919 ssh_runner.go:195] Run: which crictl
	I1019 17:33:43.789752  233919 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:33:43.821496  233919 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:33:43.821632  233919 ssh_runner.go:195] Run: crio --version
	I1019 17:33:43.860255  233919 ssh_runner.go:195] Run: crio --version
	I1019 17:33:43.901042  233919 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1019 17:33:41.600365  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:43.608114  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:46.101413  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:33:43.904077  233919 cli_runner.go:164] Run: docker network inspect embed-certs-296314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:33:43.922110  233919 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:33:43.926519  233919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:33:43.937885  233919 kubeadm.go:884] updating cluster {Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:33:43.937996  233919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:43.938058  233919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:33:43.978349  233919 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:33:43.978376  233919 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:33:43.978447  233919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:33:44.015390  233919 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:33:44.015416  233919 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:33:44.015425  233919 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:33:44.015728  233919 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-296314 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:33:44.015843  233919 ssh_runner.go:195] Run: crio config
	I1019 17:33:44.097269  233919 cni.go:84] Creating CNI manager for ""
	I1019 17:33:44.097289  233919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:33:44.097337  233919 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:33:44.097361  233919 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-296314 NodeName:embed-certs-296314 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:33:44.097542  233919 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-296314"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
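
The rendered kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of walking such a stream, assuming gopkg.in/yaml.v3 is available; only each document's kind/apiVersion header is reproduced here:

	// kubeadm_cfg.go — sketch of iterating the multi-document config stream.
	package main

	import (
		"fmt"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	const cfg = "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"

	func main() {
		dec := yaml.NewDecoder(strings.NewReader(cfg))
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break // end of the stream
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
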
	I1019 17:33:44.097635  233919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:33:44.108335  233919 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:33:44.108436  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:33:44.117970  233919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 17:33:44.133069  233919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:33:44.148514  233919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1019 17:33:44.164235  233919 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:33:44.168513  233919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:33:44.179551  233919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:44.349997  233919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:33:44.377036  233919 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314 for IP: 192.168.85.2
	I1019 17:33:44.377064  233919 certs.go:195] generating shared ca certs ...
	I1019 17:33:44.377080  233919 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:44.377301  233919 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:33:44.377376  233919 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:33:44.377393  233919 certs.go:257] generating profile certs ...
	I1019 17:33:44.377460  233919 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.key
	I1019 17:33:44.377478  233919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.crt with IP's: []
	I1019 17:33:45.427204  233919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.crt ...
	I1019 17:33:45.427267  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.crt: {Name:mk9908ee427c9ddcdaffc981e590bcb4b67e75bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.427526  233919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.key ...
	I1019 17:33:45.427544  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.key: {Name:mk3d0068edc84eda9125979974dc006ec3e7d3de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.427659  233919 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8
	I1019 17:33:45.427692  233919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:33:45.890216  233919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8 ...
	I1019 17:33:45.890247  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8: {Name:mk3c2072648c516b64b7c1f4381726280c111d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.890431  233919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8 ...
	I1019 17:33:45.890447  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8: {Name:mkfee54346a4eed5c6fd19c07a48a7b2f44bee05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.890547  233919 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt
	I1019 17:33:45.890640  233919 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key
	I1019 17:33:45.890706  233919 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key
	I1019 17:33:45.890729  233919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt with IP's: []
	I1019 17:33:46.173874  233919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt ...
	I1019 17:33:46.173903  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt: {Name:mk3b45a1a9b9dd0e89b7a391cef05651ed0f1117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:46.174087  233919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key ...
	I1019 17:33:46.174102  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key: {Name:mk6ffe6968d019c9233d25bf1713984cc3d5332d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:46.174291  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:33:46.174337  233919 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:33:46.174351  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:33:46.174378  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:33:46.174415  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:33:46.174446  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:33:46.174491  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:46.175140  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:33:46.195127  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:33:46.221170  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:33:46.243675  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:33:46.268214  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:33:46.317488  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:33:46.350153  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:33:46.383588  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:33:46.426247  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:33:46.475242  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:33:46.509757  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:33:46.540873  233919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:33:46.564338  233919 ssh_runner.go:195] Run: openssl version
	I1019 17:33:46.570632  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:33:46.579652  233919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:33:46.583866  233919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:33:46.583984  233919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:33:46.639672  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:33:46.649831  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:33:46.659896  233919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:46.664159  233919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:46.664272  233919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:46.709834  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:33:46.720413  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:33:46.730359  233919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:33:46.735120  233919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:33:46.735268  233919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:33:46.784686  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
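
Each trusted PEM gets a companion /etc/ssl/certs/<subject-hash>.0 symlink (3ec20f2e, b5213941, and 51391683 above) so OpenSSL's hash-based lookup can resolve it. A sketch of producing one such link, shelling out to openssl for the hash exactly as the log does (writing under /etc/ssl/certs requires root):

	// cert_hash_link.go — sketch of the hash-symlink step logged above.
	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
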
	I1019 17:33:46.798785  233919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:33:46.806410  233919 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:33:46.806465  233919 kubeadm.go:401] StartCluster: {Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:46.806530  233919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:33:46.806692  233919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:33:46.860679  233919 cri.go:89] found id: ""
	I1019 17:33:46.860754  233919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:33:46.878708  233919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:33:46.888471  233919 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:33:46.888536  233919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:33:46.900113  233919 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:33:46.900174  233919 kubeadm.go:158] found existing configuration files:
	
	I1019 17:33:46.900271  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:33:46.911442  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:33:46.911566  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:33:46.920842  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:33:46.930297  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:33:46.930388  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:33:46.940000  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:33:46.950192  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:33:46.950265  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:33:46.961666  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:33:46.972242  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:33:46.972376  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
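The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init runs. A condensed, illustrative equivalent (grep exits non-zero here simply because the files do not exist yet on a first start):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done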
	I1019 17:33:46.982097  233919 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:33:47.035764  233919 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:33:47.036005  233919 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:33:47.093504  233919 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:33:47.093626  233919 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:33:47.093700  233919 kubeadm.go:319] OS: Linux
	I1019 17:33:47.093776  233919 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:33:47.093852  233919 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:33:47.093937  233919 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:33:47.094007  233919 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:33:47.094067  233919 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:33:47.094124  233919 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:33:47.094194  233919 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:33:47.094272  233919 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:33:47.094354  233919 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:33:47.218784  233919 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:33:47.218944  233919 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:33:47.219082  233919 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:33:47.227195  233919 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:33:47.234259  233919 out.go:252]   - Generating certificates and keys ...
	I1019 17:33:47.234393  233919 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:33:47.234493  233919 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1019 17:33:48.599815  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:51.101852  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:33:48.485745  233919 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:33:48.993077  233919 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:33:49.243083  233919 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:33:50.041877  233919 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:33:50.602928  233919 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:33:50.603273  233919 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-296314 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:33:51.182615  233919 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:33:51.183227  233919 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-296314 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:33:51.359945  233919 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:33:52.210127  233919 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:33:53.121595  233919 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:33:53.122069  233919 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:33:53.478192  233919 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:33:53.955905  233919 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:33:54.284282  233919 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:33:54.338901  233919 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:33:55.172738  233919 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:33:55.173839  233919 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:33:55.181637  233919 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1019 17:33:53.600096  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:55.609783  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:33:55.185025  233919 out.go:252]   - Booting up control plane ...
	I1019 17:33:55.185144  233919 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:33:55.185238  233919 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:33:55.186173  233919 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:33:55.220192  233919 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:33:55.220309  233919 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:33:55.230083  233919 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:33:55.230192  233919 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:33:55.230239  233919 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:33:55.427055  233919 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:33:55.427195  233919 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:33:56.935143  233919 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.508123673s
	I1019 17:33:56.942450  233919 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:33:56.942609  233919 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1019 17:33:56.943127  233919 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:33:56.943223  233919 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 17:33:58.102080  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:00.599515  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:34:02.113991  233919 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.170098705s
	I1019 17:34:02.469245  233919 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.52414923s
	I1019 17:34:03.948960  233919 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002467737s
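The kubelet and control-plane checks above probe well-known health endpoints, which permit unauthenticated access by default; they can be queried by hand when a bring-up stalls (URLs taken from the log, -k because the serving certificates are cluster-internal):

	curl http://127.0.0.1:10248/healthz      # kubelet
	curl -k https://192.168.85.2:8443/livez  # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz  # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez    # kube-scheduler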
	I1019 17:34:03.969603  233919 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:34:03.982430  233919 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:34:03.998475  233919 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:34:03.998761  233919 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-296314 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:34:04.017399  233919 kubeadm.go:319] [bootstrap-token] Using token: eir7xu.5dylgzny1ipwrk2v
	I1019 17:34:04.020403  233919 out.go:252]   - Configuring RBAC rules ...
	I1019 17:34:04.020551  233919 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:34:04.025344  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:34:04.036411  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:34:04.040617  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:34:04.044980  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:34:04.053046  233919 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:34:04.354041  233919 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:34:04.856590  233919 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:34:05.352949  233919 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:34:05.354033  233919 kubeadm.go:319] 
	I1019 17:34:05.354108  233919 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:34:05.354114  233919 kubeadm.go:319] 
	I1019 17:34:05.354194  233919 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:34:05.354199  233919 kubeadm.go:319] 
	I1019 17:34:05.354225  233919 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:34:05.354286  233919 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:34:05.354345  233919 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:34:05.354350  233919 kubeadm.go:319] 
	I1019 17:34:05.354406  233919 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:34:05.354410  233919 kubeadm.go:319] 
	I1019 17:34:05.354459  233919 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:34:05.354467  233919 kubeadm.go:319] 
	I1019 17:34:05.354563  233919 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:34:05.354643  233919 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:34:05.354714  233919 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:34:05.354718  233919 kubeadm.go:319] 
	I1019 17:34:05.354805  233919 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:34:05.354884  233919 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:34:05.354889  233919 kubeadm.go:319] 
	I1019 17:34:05.354975  233919 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token eir7xu.5dylgzny1ipwrk2v \
	I1019 17:34:05.355082  233919 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 17:34:05.355103  233919 kubeadm.go:319] 	--control-plane 
	I1019 17:34:05.355108  233919 kubeadm.go:319] 
	I1019 17:34:05.355204  233919 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:34:05.355209  233919 kubeadm.go:319] 
	I1019 17:34:05.355294  233919 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token eir7xu.5dylgzny1ipwrk2v \
	I1019 17:34:05.355399  233919 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
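Both join commands pin the cluster CA with --discovery-token-ca-cert-hash. The hash can be recomputed from the CA certificate to confirm it matches, using kubeadm's documented recipe; a sketch assuming the RSA CA key kubeadm generates by default, with the cert path from the certificateDir logged above:

	# SHA-256 over the DER-encoded CA public key, printed as the discovery hash
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'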
	I1019 17:34:05.360433  233919 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 17:34:05.360674  233919 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 17:34:05.360787  233919 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 17:34:05.360807  233919 cni.go:84] Creating CNI manager for ""
	I1019 17:34:05.360818  233919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:34:05.362143  233919 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1019 17:34:02.600263  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:04.602092  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:34:05.363435  233919 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:34:05.367630  233919 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:34:05.367647  233919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:34:05.384603  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
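Once the manifest is applied, the kindnet pod should come up and write its CNI config, which the CRI-O log further down confirms as 10-kindnet.conflist. An illustrative spot check (the app=kindnet label is assumed from minikube's kindnet manifest):

	kubectl -n kube-system get pods -l app=kindnet
	sudo ls /etc/cni/net.d/   # expect 10-kindnet.conflist once the pod is running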
	I1019 17:34:05.743295  233919 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:34:05.743450  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:05.743587  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-296314 minikube.k8s.io/updated_at=2025_10_19T17_34_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=embed-certs-296314 minikube.k8s.io/primary=true
	I1019 17:34:05.898013  233919 ops.go:34] apiserver oom_adj: -16
	I1019 17:34:05.898112  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:06.398229  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:06.899012  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:07.399142  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:07.898660  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:08.398574  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:08.898175  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:09.398425  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:09.898674  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:10.398707  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:10.898511  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:11.024267  233919 kubeadm.go:1114] duration metric: took 5.28088454s to wait for elevateKubeSystemPrivileges
	I1019 17:34:11.024298  233919 kubeadm.go:403] duration metric: took 24.217836324s to StartCluster
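The repeated `get sa default` calls above are a 500ms poll: elevateKubeSystemPrivileges waits for the token controller to provision the default ServiceAccount, a sign the controller-manager is fully up. A minimal shell equivalent using the same binary and kubeconfig as the log:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until the ServiceAccount exists
	done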
	I1019 17:34:11.024315  233919 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:11.024375  233919 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:34:11.025672  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:11.025899  233919 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:34:11.026014  233919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:34:11.026258  233919 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:34:11.026299  233919 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:34:11.026360  233919 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-296314"
	I1019 17:34:11.026379  233919 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-296314"
	I1019 17:34:11.026404  233919 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:34:11.026665  233919 addons.go:70] Setting default-storageclass=true in profile "embed-certs-296314"
	I1019 17:34:11.026692  233919 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-296314"
	I1019 17:34:11.027018  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:34:11.027463  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:34:11.030766  233919 out.go:179] * Verifying Kubernetes components...
	I1019 17:34:11.034104  233919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:34:11.072408  233919 addons.go:239] Setting addon default-storageclass=true in "embed-certs-296314"
	I1019 17:34:11.072461  233919 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:34:11.072911  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:34:11.073964  233919 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 17:34:07.099829  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:09.100806  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:11.106805  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:34:11.077028  233919 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:34:11.077052  233919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:34:11.077116  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:34:11.119502  233919 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:34:11.119523  233919 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:34:11.119672  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:34:11.120486  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:34:11.148546  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:34:11.341780  233919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
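The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block ahead of the forward plugin so host.minikube.internal resolves to the host gateway, plus a log directive ahead of errors. The injected Corefile fragment looks like:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}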
	I1019 17:34:11.399391  233919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:34:11.415718  233919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:34:11.506441  233919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:34:11.838177  233919 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 17:34:11.841583  233919 node_ready.go:35] waiting up to 6m0s for node "embed-certs-296314" to be "Ready" ...
	I1019 17:34:12.145017  233919 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1019 17:34:12.148755  233919 addons.go:515] duration metric: took 1.122434128s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1019 17:34:12.342872  233919 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-296314" context rescaled to 1 replicas
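The rescale noted above trims CoreDNS to a single replica on this single-node cluster; done by hand it is the equivalent of:

	kubectl -n kube-system scale deployment coredns --replicas=1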
	I1019 17:34:13.099952  232207 pod_ready.go:94] pod "coredns-66bc5c9577-6k8tn" is "Ready"
	I1019 17:34:13.099982  232207 pod_ready.go:86] duration metric: took 33.506145758s for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.102952  232207 pod_ready.go:83] waiting for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.108799  232207 pod_ready.go:94] pod "etcd-no-preload-038781" is "Ready"
	I1019 17:34:13.108829  232207 pod_ready.go:86] duration metric: took 5.846406ms for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.111670  232207 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.116557  232207 pod_ready.go:94] pod "kube-apiserver-no-preload-038781" is "Ready"
	I1019 17:34:13.116584  232207 pod_ready.go:86] duration metric: took 4.886293ms for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.119207  232207 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.298000  232207 pod_ready.go:94] pod "kube-controller-manager-no-preload-038781" is "Ready"
	I1019 17:34:13.298030  232207 pod_ready.go:86] duration metric: took 178.7987ms for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.498272  232207 pod_ready.go:83] waiting for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.897815  232207 pod_ready.go:94] pod "kube-proxy-2n5k9" is "Ready"
	I1019 17:34:13.897841  232207 pod_ready.go:86] duration metric: took 399.54099ms for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:14.098177  232207 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:14.497786  232207 pod_ready.go:94] pod "kube-scheduler-no-preload-038781" is "Ready"
	I1019 17:34:14.497813  232207 pod_ready.go:86] duration metric: took 399.606849ms for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:14.497825  232207 pod_ready.go:40] duration metric: took 34.908116786s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:34:14.561744  232207 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:34:14.564947  232207 out.go:179] * Done! kubectl is now configured to use "no-preload-038781" cluster and "default" namespace by default
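The pod_ready waits above roughly correspond to one kubectl wait per component label (the minikube helper additionally tolerates a pod disappearing); an illustrative equivalent for the CoreDNS wait that took 33.5s, assuming the default context name matches the profile:

	kubectl --context no-preload-038781 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m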
	W1019 17:34:13.845049  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:15.845245  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:18.345153  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:20.345478  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:22.844605  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:25.344435  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:27.346392  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
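The node_ready retries for embed-certs-296314 are the same pattern at node scope; an illustrative by-hand equivalent:

	kubectl --context embed-certs-296314 wait --for=condition=Ready \
	  node/embed-certs-296314 --timeout=6m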
	
	
	==> CRI-O <==
	Oct 19 17:34:05 no-preload-038781 crio[653]: time="2025-10-19T17:34:05.41360945Z" level=info msg="Removed container 5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn/dashboard-metrics-scraper" id=0c97ccf4-1ed4-4b8a-ad37-013d59b6a280 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:34:08 no-preload-038781 conmon[1143]: conmon 7295d170c9f1c652ed83 <ninfo>: container 1145 exited with status 1
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.413247901Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f106bf55-df6e-4f8d-b19f-f20b17e67f01 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.417438676Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=89bf5b58-39d6-4565-8726-531f4a35f077 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.41944477Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=68404b7a-e19f-4e47-9369-a94ec9da6477 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.419769977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.42611861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.426349859Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d927fe5c4c68988d4761a004c9e449a8cfaabfc747301ed2f44d7fcd1db53fba/merged/etc/passwd: no such file or directory"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.426377666Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d927fe5c4c68988d4761a004c9e449a8cfaabfc747301ed2f44d7fcd1db53fba/merged/etc/group: no such file or directory"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.426659599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.456697575Z" level=info msg="Created container d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8: kube-system/storage-provisioner/storage-provisioner" id=68404b7a-e19f-4e47-9369-a94ec9da6477 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.457696712Z" level=info msg="Starting container: d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8" id=3a27f68b-5883-44b4-aeb9-61ccd8884f87 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.459727627Z" level=info msg="Started container" PID=1641 containerID=d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8 description=kube-system/storage-provisioner/storage-provisioner id=3a27f68b-5883-44b4-aeb9-61ccd8884f87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=007fc521ae5852077d04214ae39535fac08cd0f3cb3aae5f177cecd6b1911e9e
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.813053101Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.820151152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.820189249Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.820214825Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.823385186Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.823419739Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.823444395Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.826451538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.826492802Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.826516967Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.829601362Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.829634757Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d1ae7afadcdd6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   007fc521ae585       storage-provisioner                          kube-system
	4e48a039cc1f5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   4c271ef2cef53       dashboard-metrics-scraper-6ffb444bf9-rbgzn   kubernetes-dashboard
	8716b30ad8495       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   338426eafe947       kubernetes-dashboard-855c9754f9-qdn5q        kubernetes-dashboard
	1c6f01729c8ea       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   4158e340d188b       coredns-66bc5c9577-6k8tn                     kube-system
	7295d170c9f1c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago      Exited              storage-provisioner         1                   007fc521ae585       storage-provisioner                          kube-system
	aa2e6a947fb42       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   d19ae942ad5e2       kube-proxy-2n5k9                             kube-system
	1dfcb1be4b5bf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   c61c3081d54cf       busybox                                      default
	63a21cb0dd8ac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   5361b5de5552d       kindnet-t6qjz                                kube-system
	4ecdc75b36a4c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   3f65b88bb435f       kube-controller-manager-no-preload-038781    kube-system
	2f46f60d6de64       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   80bb29e47dc3c       etcd-no-preload-038781                       kube-system
	0d0e37aed3838       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   575db676691b8       kube-scheduler-no-preload-038781             kube-system
	536e5d3cd6aab       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   2c9f9fbcb5d21       kube-apiserver-no-preload-038781             kube-system
	
	
	==> coredns [1c6f01729c8ea65f68f7c74cd0edce25f7839aa8e906e5eaaf9f59dea56c3592] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48118 - 21177 "HINFO IN 1669950668549980651.2139323910193721934. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025776242s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-038781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-038781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=no-preload-038781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:32:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-038781
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:34:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-038781
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f7908916-dc6b-4011-8ad7-c40cd54a41fa
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-6k8tn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-no-preload-038781                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-t6qjz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-038781              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-038781     200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-2n5k9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-038781              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rbgzn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qdn5q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 110s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Warning  CgroupV1                 2m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  117s                 kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s                 kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-038781 event: Registered Node no-preload-038781 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-038781 status is now: NodeReady
	  Normal   Starting                 60s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                  node-controller  Node no-preload-038781 event: Registered Node no-preload-038781 in Controller
	
	
	==> dmesg <==
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2f46f60d6de64b25c99d5aa47d9dc9db10c0069af1a4f16eecbb3dd6f2acb2c4] <==
	{"level":"warn","ts":"2025-10-19T17:33:35.987241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.066440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.093170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.153289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.197231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.247305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.277827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.302748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.321093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.356859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.378808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.424741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.439438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.466787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.490453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.549457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.552911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.585998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.640299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.670647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.706892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.729909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.797707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.849209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.940617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48360","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:34:29 up  1:16,  0 user,  load average: 3.69, 3.95, 3.46
	Linux no-preload-038781 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [63a21cb0dd8ac64312c63edbf6eba4361cba29f0413fe4f5a288ccef35e3d0a1] <==
	I1019 17:33:38.596169       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:33:38.596729       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:33:38.596863       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:33:38.596875       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:33:38.596888       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:33:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:33:38.811197       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:33:38.811297       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:33:38.811330       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:33:38.814474       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:34:08.808589       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 17:34:08.809721       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:34:08.811068       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:34:08.811180       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 17:34:10.314504       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:34:10.314582       1 metrics.go:72] Registering metrics
	I1019 17:34:10.314629       1 controller.go:711] "Syncing nftables rules"
	I1019 17:34:18.812720       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:34:18.812775       1 main.go:301] handling current node
	I1019 17:34:28.814833       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:34:28.814882       1 main.go:301] handling current node
	
	
	==> kube-apiserver [536e5d3cd6aab4df09c0f25b4fa64db7b03ae73bd5300a9691e1868e1678cd99] <==
	I1019 17:33:37.954672       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:33:37.955681       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:33:37.955700       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:33:37.956120       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:33:37.956131       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:33:37.956137       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:33:37.956143       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:33:37.960445       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:33:37.960475       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:33:37.960480       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:33:37.960772       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:33:37.960808       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:33:37.997912       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 17:33:38.073247       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:33:38.073646       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:33:38.544527       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:33:38.797237       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:33:38.868375       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:33:38.910013       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:33:38.925929       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:33:39.005044       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.121.64"}
	I1019 17:33:39.023495       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.40.146"}
	I1019 17:33:41.461118       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:33:41.560461       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:33:41.662693       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4ecdc75b36a4c7a3c825f206e45adee636659afda96007f457af8b243c9114c0] <==
	I1019 17:33:41.158739       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:33:41.158745       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:33:41.160746       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:33:41.163302       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 17:33:41.163314       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 17:33:41.163376       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:33:41.163403       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:33:41.163415       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:33:41.163420       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:33:41.163527       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:33:41.163584       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:33:41.166340       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:33:41.167485       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:33:41.169692       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:33:41.173176       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:33:41.177570       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:33:41.177635       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:33:41.177666       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:33:41.184023       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:33:41.189006       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:33:41.193515       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:33:41.193776       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:33:41.204519       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:33:41.208709       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:33:41.213119       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [aa2e6a947fb42538c3f95b4e424f09d0784485f208dbe2872cdb5a5c87988222] <==
	I1019 17:33:38.918121       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:33:39.071768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:33:39.180874       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:33:39.181499       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:33:39.181647       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:33:39.241616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:33:39.241671       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:33:39.262888       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:33:39.263258       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:33:39.263273       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:33:39.266077       1 config.go:200] "Starting service config controller"
	I1019 17:33:39.266095       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:33:39.299456       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:33:39.299486       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:33:39.299516       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:33:39.299521       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:33:39.327685       1 config.go:309] "Starting node config controller"
	I1019 17:33:39.327706       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:33:39.327714       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:33:39.366347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:33:39.414930       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:33:39.415254       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0d0e37aed3838a493242b37f3c40b53f5f97a88b5709f7d8b16dab4324bbcaef] <==
	I1019 17:33:34.585114       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:33:37.721869       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:33:37.726646       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:33:37.726662       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:33:37.726670       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:33:37.898360       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:33:37.900726       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:33:37.916184       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:33:37.916218       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:33:37.917063       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:33:37.917098       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:33:38.019824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:33:41 no-preload-038781 kubelet[773]: I1019 17:33:41.967974     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7eb3b8ac-a1b4-4677-8411-2b730be7c599-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-qdn5q\" (UID: \"7eb3b8ac-a1b4-4677-8411-2b730be7c599\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdn5q"
	Oct 19 17:33:41 no-preload-038781 kubelet[773]: I1019 17:33:41.968047     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2hd\" (UniqueName: \"kubernetes.io/projected/7eb3b8ac-a1b4-4677-8411-2b730be7c599-kube-api-access-vg2hd\") pod \"kubernetes-dashboard-855c9754f9-qdn5q\" (UID: \"7eb3b8ac-a1b4-4677-8411-2b730be7c599\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdn5q"
	Oct 19 17:33:42 no-preload-038781 kubelet[773]: W1019 17:33:42.162683     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/crio-4c271ef2cef5396a68aeb6c7e91d14f66c48cddb7255061b24df2bc93cdebff6 WatchSource:0}: Error finding container 4c271ef2cef5396a68aeb6c7e91d14f66c48cddb7255061b24df2bc93cdebff6: Status 404 returned error can't find the container with id 4c271ef2cef5396a68aeb6c7e91d14f66c48cddb7255061b24df2bc93cdebff6
	Oct 19 17:33:42 no-preload-038781 kubelet[773]: I1019 17:33:42.601349     773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:33:49 no-preload-038781 kubelet[773]: I1019 17:33:49.350049     773 scope.go:117] "RemoveContainer" containerID="2a2c8950c24dc7a570645bde8f9d566c54a6709bfacfc00a45a04d20ca8a3fad"
	Oct 19 17:33:50 no-preload-038781 kubelet[773]: I1019 17:33:50.354612     773 scope.go:117] "RemoveContainer" containerID="2a2c8950c24dc7a570645bde8f9d566c54a6709bfacfc00a45a04d20ca8a3fad"
	Oct 19 17:33:50 no-preload-038781 kubelet[773]: I1019 17:33:50.354915     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:33:50 no-preload-038781 kubelet[773]: E1019 17:33:50.355133     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:33:51 no-preload-038781 kubelet[773]: I1019 17:33:51.361500     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:33:51 no-preload-038781 kubelet[773]: E1019 17:33:51.361665     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:33:52 no-preload-038781 kubelet[773]: I1019 17:33:52.360185     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:33:52 no-preload-038781 kubelet[773]: E1019 17:33:52.360343     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.097164     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.395618     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.395812     773 scope.go:117] "RemoveContainer" containerID="4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: E1019 17:34:05.396044     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.428416     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdn5q" podStartSLOduration=12.063826499 podStartE2EDuration="24.428397793s" podCreationTimestamp="2025-10-19 17:33:41 +0000 UTC" firstStartedPulling="2025-10-19 17:33:42.462831547 +0000 UTC m=+12.699255866" lastFinishedPulling="2025-10-19 17:33:54.827402841 +0000 UTC m=+25.063827160" observedRunningTime="2025-10-19 17:33:55.401332825 +0000 UTC m=+25.637757161" watchObservedRunningTime="2025-10-19 17:34:05.428397793 +0000 UTC m=+35.664822120"
	Oct 19 17:34:09 no-preload-038781 kubelet[773]: I1019 17:34:09.411845     773 scope.go:117] "RemoveContainer" containerID="7295d170c9f1c652ed83cb31b1b942d47a5e8f0ac28ddf7808882e1b9c515fda"
	Oct 19 17:34:12 no-preload-038781 kubelet[773]: I1019 17:34:12.075064     773 scope.go:117] "RemoveContainer" containerID="4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	Oct 19 17:34:12 no-preload-038781 kubelet[773]: E1019 17:34:12.075735     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:25 no-preload-038781 kubelet[773]: I1019 17:34:25.097127     773 scope.go:117] "RemoveContainer" containerID="4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	Oct 19 17:34:25 no-preload-038781 kubelet[773]: E1019 17:34:25.097349     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:26 no-preload-038781 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:34:26 no-preload-038781 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:34:26 no-preload-038781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8716b30ad849506fd3f8f4715e585b04ced2a15cf9ed5a6881825f2a54647510] <==
	2025/10/19 17:33:54 Starting overwatch
	2025/10/19 17:33:54 Using namespace: kubernetes-dashboard
	2025/10/19 17:33:54 Using in-cluster config to connect to apiserver
	2025/10/19 17:33:54 Using secret token for csrf signing
	2025/10/19 17:33:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:33:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:33:54 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:33:54 Generating JWE encryption key
	2025/10/19 17:33:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:33:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:33:55 Initializing JWE encryption key from synchronized object
	2025/10/19 17:33:55 Creating in-cluster Sidecar client
	2025/10/19 17:33:55 Serving insecurely on HTTP port: 9090
	2025/10/19 17:33:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:34:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7295d170c9f1c652ed83cb31b1b942d47a5e8f0ac28ddf7808882e1b9c515fda] <==
	I1019 17:33:38.526061       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:34:08.527841       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8] <==
	I1019 17:34:09.515535       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:34:09.532223       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:34:09.532293       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:34:09.538283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:12.993856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:17.253839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:20.852047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:23.905459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:26.928489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:26.936154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:34:26.936455       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:34:26.936636       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-038781_e8f696ba-d0f3-4deb-bd76-f5efcded8734!
	I1019 17:34:26.937238       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3e86efa-396c-4e58-879b-5827a6d5b481", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-038781_e8f696ba-d0f3-4deb-bd76-f5efcded8734 became leader
	W1019 17:34:26.942009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:26.953325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:34:27.037304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-038781_e8f696ba-d0f3-4deb-bd76-f5efcded8734!
	W1019 17:34:28.956301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:28.961222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-038781 -n no-preload-038781
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-038781 -n no-preload-038781: exit status 2 (371.167572ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-038781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-038781
helpers_test.go:243: (dbg) docker inspect no-preload-038781:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc",
	        "Created": "2025-10-19T17:31:51.406561575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:33:22.007891381Z",
	            "FinishedAt": "2025-10-19T17:33:20.927764282Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/hosts",
	        "LogPath": "/var/lib/docker/containers/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc-json.log",
	        "Name": "/no-preload-038781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-038781:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-038781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc",
	                "LowerDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39266e0363fe6cee7274d131589d97093351b2062aaecb6fccd6fbeeb1da717f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-038781",
	                "Source": "/var/lib/docker/volumes/no-preload-038781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-038781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-038781",
	                "name.minikube.sigs.k8s.io": "no-preload-038781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b628041d5d4a3e0351fb5578481d9491ab91da8c6997622c33fc2966be9092a8",
	            "SandboxKey": "/var/run/docker/netns/b628041d5d4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-038781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:66:61:ca:41:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b07775101cd68c8ddd9de09f237af6ede6d8644dfb4bb5013ca32815c3f150a",
	                    "EndpointID": "64ae0bcdb69a4f7f287915acb47c7230dd64c468a7d59c619d01fd40a797fab4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-038781",
	                        "4de6d765b1ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781: exit status 2 (347.24482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-038781 logs -n 25
E1019 17:34:31.771982    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-038781 logs -n 25: (1.372626281s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-953581 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo containerd config dump                                                                                                                                                                                                  │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo crio config                                                                                                                                                                                                             │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ delete  │ -p bridge-953581                                                                                                                                                                                                                              │ bridge-953581          │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ stop    │ -p old-k8s-version-125363 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:32 UTC │
	│ start   │ -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                                                                                               │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363 │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314     │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ image   │ no-preload-038781 image list --format=json                                                                                                                                                                                                    │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-038781      │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:33:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:33:28.277182  233919 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:33:28.277335  233919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:28.277341  233919 out.go:374] Setting ErrFile to fd 2...
	I1019 17:33:28.277346  233919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:33:28.277617  233919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:33:28.278089  233919 out.go:368] Setting JSON to false
	I1019 17:33:28.278997  233919 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4556,"bootTime":1760890652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:33:28.279071  233919 start.go:143] virtualization:  
	I1019 17:33:28.282664  233919 out.go:179] * [embed-certs-296314] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:33:28.285888  233919 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:33:28.285955  233919 notify.go:221] Checking for updates...
	I1019 17:33:28.291964  233919 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:33:28.294858  233919 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:33:28.298600  233919 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:33:28.304040  233919 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:33:28.306995  233919 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:33:28.310377  233919 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:33:28.310478  233919 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:33:28.346666  233919 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:33:28.346793  233919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:33:28.460375  233919 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:33:28.424716141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:33:28.460485  233919 docker.go:319] overlay module found
	I1019 17:33:28.463638  233919 out.go:179] * Using the docker driver based on user configuration
	I1019 17:33:28.466605  233919 start.go:309] selected driver: docker
	I1019 17:33:28.466628  233919 start.go:930] validating driver "docker" against <nil>
	I1019 17:33:28.466641  233919 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:33:28.467352  233919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:33:28.563983  233919 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:33:28.553551864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:33:28.564131  233919 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:33:28.564350  233919 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:33:28.567264  233919 out.go:179] * Using Docker driver with root privileges
	I1019 17:33:28.570129  233919 cni.go:84] Creating CNI manager for ""
	I1019 17:33:28.570190  233919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:33:28.570197  233919 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:33:28.570277  233919 start.go:353] cluster config:
	{Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:28.573232  233919 out.go:179] * Starting "embed-certs-296314" primary control-plane node in "embed-certs-296314" cluster
	I1019 17:33:28.576068  233919 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:33:28.579012  233919 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:33:28.581820  233919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:28.581879  233919 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:33:28.581889  233919 cache.go:59] Caching tarball of preloaded images
	I1019 17:33:28.581969  233919 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:33:28.581977  233919 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:33:28.582106  233919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json ...
	I1019 17:33:28.582124  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json: {Name:mk36693101c8fc969669726520164b9d80aaac03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
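The profile saved above is plain JSON on disk, so the long `cluster config:` dump can be cross-checked directly. A minimal sketch, assuming `jq` is available on the Jenkins host (field names as they appear in the dump):

    CFG=/home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json
    # Pull out the fields that matter for this test: driver, runtime, k8s version
    jq '.Driver, .KubernetesConfig.ContainerRuntime, .KubernetesConfig.KubernetesVersion' "$CFG"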
	I1019 17:33:28.582290  233919 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:33:28.610893  233919 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:33:28.610914  233919 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:33:28.610936  233919 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:33:28.610962  233919 start.go:360] acquireMachinesLock for embed-certs-296314: {Name:mkbadf116eb8b8b2fc66452f2f3b93b38bb1a004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:33:28.611063  233919 start.go:364] duration metric: took 86.573µs to acquireMachinesLock for "embed-certs-296314"
	I1019 17:33:28.611093  233919 start.go:93] Provisioning new machine with config: &{Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:33:28.611182  233919 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:33:27.003704  232207 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:33:27.003732  232207 machine.go:97] duration metric: took 4.577520969s to provisionDockerMachine
	I1019 17:33:27.003762  232207 start.go:293] postStartSetup for "no-preload-038781" (driver="docker")
	I1019 17:33:27.003776  232207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:33:27.003859  232207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:33:27.003906  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.030344  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.148305  232207 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:33:27.152118  232207 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:33:27.152144  232207 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:33:27.152155  232207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:33:27.152231  232207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:33:27.152306  232207 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:33:27.152404  232207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:33:27.161123  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:27.182062  232207 start.go:296] duration metric: took 178.282871ms for postStartSetup
	I1019 17:33:27.182145  232207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:33:27.182200  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.208174  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.316528  232207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:33:27.323438  232207 fix.go:56] duration metric: took 5.38097894s for fixHost
	I1019 17:33:27.323461  232207 start.go:83] releasing machines lock for "no-preload-038781", held for 5.381035581s
	I1019 17:33:27.323539  232207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-038781
	I1019 17:33:27.346441  232207 ssh_runner.go:195] Run: cat /version.json
	I1019 17:33:27.346515  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.346863  232207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:33:27.346942  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:27.383616  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.396428  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:27.487262  232207 ssh_runner.go:195] Run: systemctl --version
	I1019 17:33:27.611746  232207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:33:27.697901  232207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:33:27.703234  232207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:33:27.703302  232207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:33:27.714224  232207 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:33:27.714246  232207 start.go:496] detecting cgroup driver to use...
	I1019 17:33:27.714277  232207 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:33:27.714318  232207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:33:27.732129  232207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:33:27.747396  232207 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:33:27.747468  232207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:33:27.763558  232207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:33:27.779993  232207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:33:28.022154  232207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:33:28.217997  232207 docker.go:234] disabling docker service ...
	I1019 17:33:28.218085  232207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:33:28.239387  232207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:33:28.255875  232207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:33:28.436264  232207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:33:28.598736  232207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:33:28.612566  232207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:33:28.629979  232207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:33:28.630037  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.641347  232207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:33:28.641408  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.651326  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.663334  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.674368  232207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:33:28.684633  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.695620  232207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.714004  232207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:28.726350  232207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:33:28.737644  232207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:33:28.747093  232207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:28.914688  232207 ssh_runner.go:195] Run: sudo systemctl restart crio
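Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below before CRI-O restarts. This is a reconstruction from the commands, not a dump of the real file, and the TOML section headers are assumed (the seds match the keys wherever they appear):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]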
	I1019 17:33:29.092410  232207 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:33:29.092477  232207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:33:29.098362  232207 start.go:564] Will wait 60s for crictl version
	I1019 17:33:29.098421  232207 ssh_runner.go:195] Run: which crictl
	I1019 17:33:29.102440  232207 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:33:29.135221  232207 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:33:29.135297  232207 ssh_runner.go:195] Run: crio --version
	I1019 17:33:29.194932  232207 ssh_runner.go:195] Run: crio --version
	I1019 17:33:29.236198  232207 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:33:29.239136  232207 cli_runner.go:164] Run: docker network inspect no-preload-038781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:33:29.261350  232207 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:33:29.265130  232207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:33:29.321790  232207 kubeadm.go:884] updating cluster {Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:33:29.321912  232207 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:29.321951  232207 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:33:29.384388  232207 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:33:29.384409  232207 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:33:29.384417  232207 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 17:33:29.384518  232207 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-038781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
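The empty `ExecStart=` followed by a full `ExecStart=` line in the unit above is the standard systemd override pattern: the drop-in (installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) clears the base unit's command before replacing it. Confirming the merge after the daemon-reload, as a sketch:

    systemctl cat kubelet                 # base unit plus each *.conf drop-in, in order
    systemctl show kubelet -p ExecStart   # the final merged ExecStart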
	I1019 17:33:29.384619  232207 ssh_runner.go:195] Run: crio config
	I1019 17:33:29.472761  232207 cni.go:84] Creating CNI manager for ""
	I1019 17:33:29.472825  232207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:33:29.472862  232207 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:33:29.472906  232207 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-038781 NodeName:no-preload-038781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:33:29.473061  232207 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-038781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:33:29.473148  232207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:33:29.492477  232207 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:33:29.492558  232207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:33:29.504920  232207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:33:29.520982  232207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:33:29.536740  232207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
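With the rendered config uploaded as kubeadm.yaml.new, it can be vetted before anything applies it. A sketch using the bundled binary; this assumes `kubeadm config validate`, which recent kubeadm releases (v1.26 and later) ship:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new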
	I1019 17:33:29.558231  232207 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:33:29.569275  232207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:33:29.581524  232207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:29.745251  232207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:33:29.761260  232207 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781 for IP: 192.168.76.2
	I1019 17:33:29.761320  232207 certs.go:195] generating shared ca certs ...
	I1019 17:33:29.761352  232207 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:29.761518  232207 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:33:29.761590  232207 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:33:29.761612  232207 certs.go:257] generating profile certs ...
	I1019 17:33:29.761730  232207 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/client.key
	I1019 17:33:29.761844  232207 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key.559c1e8d
	I1019 17:33:29.761910  232207 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key
	I1019 17:33:29.762055  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:33:29.762122  232207 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:33:29.762158  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:33:29.762208  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:33:29.762262  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:33:29.762316  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:33:29.762399  232207 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:29.763053  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:33:29.797012  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:33:29.829624  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:33:29.858887  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:33:29.885905  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:33:29.912896  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:33:29.967935  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:33:29.993770  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/no-preload-038781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:33:30.071324  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:33:30.109539  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:33:30.136824  232207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:33:30.158664  232207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:33:30.175070  232207 ssh_runner.go:195] Run: openssl version
	I1019 17:33:30.184499  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:33:30.194973  232207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:33:30.199385  232207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:33:30.199501  232207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:33:30.241939  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:33:30.251019  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:33:30.260429  232207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:30.268967  232207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:30.269068  232207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:30.311217  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:33:30.319197  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:33:30.327472  232207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:33:30.332255  232207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:33:30.332379  232207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:33:30.377517  232207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
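The hash-then-link sequence above is OpenSSL's certificate-directory convention: `-hash` prints the subject-name hash, and a symlink named `<hash>.0` in /etc/ssl/certs is what makes the CA discoverable. The same dance by hand, simplified to a single link, matches the b5213941.0 seen in the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")    # prints b5213941 for this CA
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"   # .0 suffix leaves room for hash collisions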
	I1019 17:33:30.385528  232207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:33:30.389951  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:33:30.432703  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:33:30.475365  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:33:30.543147  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:33:30.635022  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:33:30.758163  232207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
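Each `-checkend 86400` above is consumed purely through its exit status: openssl exits 0 if the certificate is still valid 86400 seconds (24 hours) from now, 1 otherwise, so the run doubles as a cheap expiry probe before the cluster is started. The idiom in isolation:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "still valid 24h from now"
    else
        echo "expires within 24h (or already expired)"
    fi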
	I1019 17:33:30.868818  232207 kubeadm.go:401] StartCluster: {Name:no-preload-038781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-038781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:30.868913  232207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:33:30.868999  232207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:33:30.948539  232207 cri.go:89] found id: "4ecdc75b36a4c7a3c825f206e45adee636659afda96007f457af8b243c9114c0"
	I1019 17:33:30.948599  232207 cri.go:89] found id: "2f46f60d6de64b25c99d5aa47d9dc9db10c0069af1a4f16eecbb3dd6f2acb2c4"
	I1019 17:33:30.948621  232207 cri.go:89] found id: "0d0e37aed3838a493242b37f3c40b53f5f97a88b5709f7d8b16dab4324bbcaef"
	I1019 17:33:30.948642  232207 cri.go:89] found id: "536e5d3cd6aab4df09c0f25b4fa64db7b03ae73bd5300a9691e1868e1678cd99"
	I1019 17:33:30.948660  232207 cri.go:89] found id: ""
	I1019 17:33:30.948779  232207 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:33:30.980090  232207 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:33:30Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:33:30.980227  232207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:33:30.989084  232207 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:33:30.989160  232207 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:33:30.989242  232207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:33:30.997384  232207 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:33:30.997857  232207 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-038781" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:33:30.998011  232207 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-038781" cluster setting kubeconfig missing "no-preload-038781" context setting]
	I1019 17:33:30.998372  232207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:30.999934  232207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:33:31.028973  232207 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:33:31.029052  232207 kubeadm.go:602] duration metric: took 39.863988ms to restartPrimaryControlPlane
	I1019 17:33:31.029076  232207 kubeadm.go:403] duration metric: took 160.268431ms to StartCluster
	I1019 17:33:31.029129  232207 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:31.029210  232207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:33:31.029835  232207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
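The repair above writes the missing cluster and context entries into the shared kubeconfig. Whether the write landed can be verified with plain kubectl against the same file, as a sketch:

    KC=/home/jenkins/minikube-integration/21683-2307/kubeconfig
    kubectl --kubeconfig "$KC" config get-contexts
    kubectl --kubeconfig "$KC" config view -o jsonpath='{.clusters[*].name}'; echo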
	I1019 17:33:31.030087  232207 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:33:31.030435  232207 config.go:182] Loaded profile config "no-preload-038781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:33:31.030495  232207 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:33:31.030647  232207 addons.go:70] Setting storage-provisioner=true in profile "no-preload-038781"
	I1019 17:33:31.030666  232207 addons.go:239] Setting addon storage-provisioner=true in "no-preload-038781"
	W1019 17:33:31.030677  232207 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:33:31.030699  232207 addons.go:70] Setting dashboard=true in profile "no-preload-038781"
	I1019 17:33:31.030736  232207 addons.go:239] Setting addon dashboard=true in "no-preload-038781"
	W1019 17:33:31.030756  232207 addons.go:248] addon dashboard should already be in state true
	I1019 17:33:31.030789  232207 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:33:31.030701  232207 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:33:31.031317  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.031356  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.030708  232207 addons.go:70] Setting default-storageclass=true in profile "no-preload-038781"
	I1019 17:33:31.031907  232207 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-038781"
	I1019 17:33:31.032174  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.036955  232207 out.go:179] * Verifying Kubernetes components...
	I1019 17:33:31.040179  232207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:31.072891  232207 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:33:31.077971  232207 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:33:31.077994  232207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:33:31.078064  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:31.093532  232207 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:33:31.100404  232207 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:33:31.103284  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:33:31.103307  232207 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:33:31.103376  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:31.111488  232207 addons.go:239] Setting addon default-storageclass=true in "no-preload-038781"
	W1019 17:33:31.111512  232207 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:33:31.111537  232207 host.go:66] Checking if "no-preload-038781" exists ...
	I1019 17:33:31.111957  232207 cli_runner.go:164] Run: docker container inspect no-preload-038781 --format={{.State.Status}}
	I1019 17:33:31.137971  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:31.162078  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:31.163252  232207 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:33:31.163280  232207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:33:31.163341  232207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-038781
	I1019 17:33:31.189770  232207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/no-preload-038781/id_rsa Username:docker}
	I1019 17:33:31.485180  232207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:33:31.544968  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:33:31.544994  232207 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:33:31.561608  232207 node_ready.go:35] waiting up to 6m0s for node "no-preload-038781" to be "Ready" ...
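The six-minute node_ready wait starting here polls the node's Ready condition. The equivalent one-shot check against the cluster under test, as a sketch:

    kubectl --kubeconfig /home/jenkins/minikube-integration/21683-2307/kubeconfig \
      get node no-preload-038781 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # "True" once Ready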
	I1019 17:33:28.615089  233919 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:33:28.615344  233919 start.go:159] libmachine.API.Create for "embed-certs-296314" (driver="docker")
	I1019 17:33:28.615396  233919 client.go:171] LocalClient.Create starting
	I1019 17:33:28.615463  233919 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 17:33:28.615503  233919 main.go:143] libmachine: Decoding PEM data...
	I1019 17:33:28.615522  233919 main.go:143] libmachine: Parsing certificate...
	I1019 17:33:28.615603  233919 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 17:33:28.615631  233919 main.go:143] libmachine: Decoding PEM data...
	I1019 17:33:28.615645  233919 main.go:143] libmachine: Parsing certificate...
	I1019 17:33:28.616069  233919 cli_runner.go:164] Run: docker network inspect embed-certs-296314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:33:28.636317  233919 cli_runner.go:211] docker network inspect embed-certs-296314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:33:28.636400  233919 network_create.go:284] running [docker network inspect embed-certs-296314] to gather additional debugging logs...
	I1019 17:33:28.636421  233919 cli_runner.go:164] Run: docker network inspect embed-certs-296314
	W1019 17:33:28.656331  233919 cli_runner.go:211] docker network inspect embed-certs-296314 returned with exit code 1
	I1019 17:33:28.656371  233919 network_create.go:287] error running [docker network inspect embed-certs-296314]: docker network inspect embed-certs-296314: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-296314 not found
	I1019 17:33:28.656385  233919 network_create.go:289] output of [docker network inspect embed-certs-296314]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-296314 not found
	
	** /stderr **
	I1019 17:33:28.656476  233919 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:33:28.678243  233919 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
	I1019 17:33:28.678620  233919 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74bebb68d32f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:9e:84:17:01:b0} reservation:<nil>}
	I1019 17:33:28.679027  233919 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9382370e2eea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:16:7c:3f:44:e1} reservation:<nil>}
	I1019 17:33:28.679294  233919 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3b07775101cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8a:d8:e7:d0:b2:4a} reservation:<nil>}
	I1019 17:33:28.679689  233919 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a10440}
	I1019 17:33:28.679716  233919 network_create.go:124] attempt to create docker network embed-certs-296314 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:33:28.679777  233919 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-296314 embed-certs-296314
	I1019 17:33:28.750485  233919 network_create.go:108] docker network embed-certs-296314 192.168.85.0/24 created
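The scan above walks minikube's private /24 candidates (192.168.49.0, 192.168.58.0, 192.168.67.0, 192.168.76.0) until it finds one with no bridge interface attached, then creates the network with an explicit gateway and MTU. Confirming what was created, as a sketch:

    docker network inspect embed-certs-296314 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.85.0/24 via 192.168.85.1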
	I1019 17:33:28.750519  233919 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-296314" container
	I1019 17:33:28.750791  233919 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:33:28.765359  233919 cli_runner.go:164] Run: docker volume create embed-certs-296314 --label name.minikube.sigs.k8s.io=embed-certs-296314 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:33:28.783905  233919 oci.go:103] Successfully created a docker volume embed-certs-296314
	I1019 17:33:28.783990  233919 cli_runner.go:164] Run: docker run --rm --name embed-certs-296314-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-296314 --entrypoint /usr/bin/test -v embed-certs-296314:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:33:29.413935  233919 oci.go:107] Successfully prepared a docker volume embed-certs-296314
	I1019 17:33:29.413971  233919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:29.413990  233919 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:33:29.414068  233919 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-296314:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
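The extraction trick here bind-mounts the lz4 preload tarball read-only into a throw-away kicbase container alongside the named volume, so tar unpacks the cached images directly into the volume that will later back the node container's /var. Listing what landed, as a sketch that reuses the already-pulled kicbase image with its entrypoint overridden to a plain ls:

    docker run --rm --entrypoint /bin/ls \
      -v embed-certs-296314:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
      /var/lib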
	I1019 17:33:31.585792  232207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:33:31.631386  232207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:33:31.649292  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:33:31.649320  232207 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:33:31.736853  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:33:31.736892  232207 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:33:31.847910  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:33:31.847943  232207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:33:31.939519  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:33:31.939591  232207 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:33:32.022269  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:33:32.022310  232207 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:33:32.048604  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:33:32.048647  232207 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:33:32.076458  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:33:32.076495  232207 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:33:32.097066  232207 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:33:32.097138  232207 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:33:32.120250  232207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
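The ten dashboard manifests staged above are applied in one kubectl invocation rather than one apply per file, which keeps the operation close to atomic. A hedged Go sketch of building that argument list with os/exec (manifest list abbreviated; not minikube's code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m) // one -f flag per manifest, single apply
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
}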
	I1019 17:33:35.034059  233919 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-296314:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.619942329s)
	I1019 17:33:35.034088  233919 kic.go:203] duration metric: took 5.620094577s to extract preloaded images to volume ...
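The preload step just completed mounts the host tarball read-only and the node's named volume read-write into a throwaway container whose entrypoint is tar. A sketch of the same pattern via os/exec (paths, volume name, and image tag are placeholders, not the exact ones in the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Run tar inside the image to unpack the lz4 tarball into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
		"-v", "node-volume:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:TAG",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}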
	W1019 17:33:35.034218  233919 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:33:35.034322  233919 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:33:35.136334  233919 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-296314 --name embed-certs-296314 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-296314 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-296314 --network embed-certs-296314 --ip 192.168.85.2 --volume embed-certs-296314:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:33:35.568393  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Running}}
	I1019 17:33:35.602776  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:33:35.638678  233919 cli_runner.go:164] Run: docker exec embed-certs-296314 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:33:35.695120  233919 oci.go:144] the created container "embed-certs-296314" has a running status.
	I1019 17:33:35.695160  233919 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa...
	I1019 17:33:36.041962  233919 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:33:36.068712  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:33:36.095625  233919 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:33:36.095654  233919 kic_runner.go:114] Args: [docker exec --privileged embed-certs-296314 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:33:36.164819  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:33:36.200901  233919 machine.go:94] provisionDockerMachine start ...
	I1019 17:33:36.200998  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:36.236320  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:36.236640  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:36.236649  233919 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:33:36.237219  233919 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50948->127.0.0.1:33103: read: connection reset by peer
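The handshake failure above is expected on first contact: the container's sshd is still starting, so the provisioner retries until the forwarded port accepts connections (success is logged at 17:33:39.402 further down). A generic dial-with-deadline sketch of that retry pattern, using only the standard library:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort retries a TCP dial until the address accepts connections,
// absorbing "connection reset" errors while the remote service starts.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("port %s not reachable after %s", addr, timeout)
}

func main() {
	if err := waitForPort("127.0.0.1:33103", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}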
	I1019 17:33:37.722053  232207 node_ready.go:49] node "no-preload-038781" is "Ready"
	I1019 17:33:37.722079  232207 node_ready.go:38] duration metric: took 6.160426066s for node "no-preload-038781" to be "Ready" ...
	I1019 17:33:37.722092  232207 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:33:37.722152  232207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:33:37.944654  232207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.358826256s)
	I1019 17:33:39.538757  232207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.907336317s)
	I1019 17:33:39.538877  232207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.418546923s)
	I1019 17:33:39.539020  232207 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.81685762s)
	I1019 17:33:39.539040  232207 api_server.go:72] duration metric: took 8.508901382s to wait for apiserver process to appear ...
	I1019 17:33:39.539048  232207 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:33:39.539069  232207 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 17:33:39.541856  232207 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-038781 addons enable metrics-server
	
	I1019 17:33:39.544679  232207 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1019 17:33:39.548725  232207 addons.go:515] duration metric: took 8.518225186s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1019 17:33:39.550996  232207 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 17:33:39.553253  232207 api_server.go:141] control plane version: v1.34.1
	I1019 17:33:39.553282  232207 api_server.go:131] duration metric: took 14.224244ms to wait for apiserver health ...
	I1019 17:33:39.553292  232207 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:33:39.557956  232207 system_pods.go:59] 8 kube-system pods found
	I1019 17:33:39.558002  232207 system_pods.go:61] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:33:39.558011  232207 system_pods.go:61] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:33:39.558017  232207 system_pods.go:61] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:33:39.558025  232207 system_pods.go:61] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:33:39.558033  232207 system_pods.go:61] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:33:39.558046  232207 system_pods.go:61] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:33:39.558056  232207 system_pods.go:61] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:33:39.558061  232207 system_pods.go:61] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Running
	I1019 17:33:39.558074  232207 system_pods.go:74] duration metric: took 4.775581ms to wait for pod list to return data ...
	I1019 17:33:39.558082  232207 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:33:39.561639  232207 default_sa.go:45] found service account: "default"
	I1019 17:33:39.561666  232207 default_sa.go:55] duration metric: took 3.574103ms for default service account to be created ...
	I1019 17:33:39.561676  232207 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:33:39.565301  232207 system_pods.go:86] 8 kube-system pods found
	I1019 17:33:39.565338  232207 system_pods.go:89] "coredns-66bc5c9577-6k8tn" [db59a39e-b75f-4f1b-abb0-099bf1c7526e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:33:39.565347  232207 system_pods.go:89] "etcd-no-preload-038781" [9b504eb5-e911-464a-81f8-4b917f9fd041] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:33:39.565352  232207 system_pods.go:89] "kindnet-t6qjz" [75c3af5d-0b86-49c0-8c67-355e94a238e9] Running
	I1019 17:33:39.565359  232207 system_pods.go:89] "kube-apiserver-no-preload-038781" [3b8b3616-b1d0-4180-9a62-6d08582cc194] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:33:39.565367  232207 system_pods.go:89] "kube-controller-manager-no-preload-038781" [9869e8fa-5be9-4fa2-b35d-f08352e3e157] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:33:39.565373  232207 system_pods.go:89] "kube-proxy-2n5k9" [571f6c31-a383-4d1f-ba97-b0ab16c1b537] Running
	I1019 17:33:39.565389  232207 system_pods.go:89] "kube-scheduler-no-preload-038781" [9e903d79-9094-4d53-a16a-23648f8a79fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:33:39.565397  232207 system_pods.go:89] "storage-provisioner" [356dc8ab-93c3-4567-8229-41c2153acabc] Running
	I1019 17:33:39.565405  232207 system_pods.go:126] duration metric: took 3.72238ms to wait for k8s-apps to be running ...
	I1019 17:33:39.565413  232207 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:33:39.565472  232207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:33:39.580586  232207 system_svc.go:56] duration metric: took 15.16245ms WaitForService to wait for kubelet
	I1019 17:33:39.580609  232207 kubeadm.go:587] duration metric: took 8.550469377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:33:39.580628  232207 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:33:39.584451  232207 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:33:39.584480  232207 node_conditions.go:123] node cpu capacity is 2
	I1019 17:33:39.584491  232207 node_conditions.go:105] duration metric: took 3.857094ms to run NodePressure ...
	I1019 17:33:39.584503  232207 start.go:242] waiting for startup goroutines ...
	I1019 17:33:39.584511  232207 start.go:247] waiting for cluster config update ...
	I1019 17:33:39.584521  232207 start.go:256] writing updated cluster config ...
	I1019 17:33:39.584803  232207 ssh_runner.go:195] Run: rm -f paused
	I1019 17:33:39.589618  232207 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:33:39.593812  232207 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:33:39.402619  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-296314
	
	I1019 17:33:39.402685  233919 ubuntu.go:182] provisioning hostname "embed-certs-296314"
	I1019 17:33:39.402778  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:39.439111  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:39.439411  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:39.439423  233919 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-296314 && echo "embed-certs-296314" | sudo tee /etc/hostname
	I1019 17:33:39.616820  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-296314
	
	I1019 17:33:39.616944  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:39.641738  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:39.642053  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:39.642076  233919 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-296314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-296314/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-296314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:33:39.806881  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:33:39.806907  233919 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:33:39.806926  233919 ubuntu.go:190] setting up certificates
	I1019 17:33:39.806936  233919 provision.go:84] configureAuth start
	I1019 17:33:39.807005  233919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:33:39.829328  233919 provision.go:143] copyHostCerts
	I1019 17:33:39.829399  233919 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:33:39.829413  233919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:33:39.829492  233919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:33:39.829588  233919 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:33:39.829599  233919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:33:39.829629  233919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:33:39.829683  233919 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:33:39.829692  233919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:33:39.829721  233919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:33:39.829783  233919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.embed-certs-296314 san=[127.0.0.1 192.168.85.2 embed-certs-296314 localhost minikube]
	I1019 17:33:41.062833  233919 provision.go:177] copyRemoteCerts
	I1019 17:33:41.062922  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:33:41.062971  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.083535  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:41.202275  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:33:41.224325  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:33:41.244324  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:33:41.268898  233919 provision.go:87] duration metric: took 1.461939276s to configureAuth
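configureAuth generates a server certificate whose SANs cover the loopback address, the static node IP, and the machine's hostnames, as listed in the san=[...] line above. A compact crypto/x509 sketch of issuing such a certificate (self-signed here for brevity and with error handling elided; the real flow signs with the ca.pem key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048) // errors elided for brevity
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-296314"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log.
		DNSNames:    []string{"embed-certs-296314", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}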
	I1019 17:33:41.268968  233919 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:33:41.269169  233919 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:33:41.269273  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.288644  233919 main.go:143] libmachine: Using SSH client type: native
	I1019 17:33:41.288949  233919 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1019 17:33:41.288974  233919 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:33:41.671524  233919 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:33:41.671546  233919 machine.go:97] duration metric: took 5.470625492s to provisionDockerMachine
	I1019 17:33:41.671555  233919 client.go:174] duration metric: took 13.056148544s to LocalClient.Create
	I1019 17:33:41.671568  233919 start.go:167] duration metric: took 13.0562256s to libmachine.API.Create "embed-certs-296314"
	I1019 17:33:41.671575  233919 start.go:293] postStartSetup for "embed-certs-296314" (driver="docker")
	I1019 17:33:41.671585  233919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:33:41.671648  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:33:41.671687  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.690040  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:41.800857  233919 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:33:41.805164  233919 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:33:41.805193  233919 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:33:41.805208  233919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:33:41.805277  233919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:33:41.805372  233919 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:33:41.805507  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:33:41.823047  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:41.860096  233919 start.go:296] duration metric: took 188.503291ms for postStartSetup
	I1019 17:33:41.860483  233919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:33:41.884204  233919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json ...
	I1019 17:33:41.884505  233919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:33:41.884551  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:41.912206  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:42.014639  233919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:33:42.027521  233919 start.go:128] duration metric: took 13.416322745s to createHost
	I1019 17:33:42.027564  233919 start.go:83] releasing machines lock for "embed-certs-296314", held for 13.416490198s
	I1019 17:33:42.027684  233919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:33:42.045796  233919 ssh_runner.go:195] Run: cat /version.json
	I1019 17:33:42.045858  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:42.046102  233919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:33:42.046167  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:33:42.068974  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:42.088084  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:33:42.200786  233919 ssh_runner.go:195] Run: systemctl --version
	I1019 17:33:42.298583  233919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:33:42.340777  233919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:33:42.344979  233919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:33:42.345092  233919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:33:42.377694  233919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 17:33:42.377802  233919 start.go:496] detecting cgroup driver to use...
	I1019 17:33:42.377868  233919 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:33:42.377949  233919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:33:42.404791  233919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:33:42.420843  233919 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:33:42.420951  233919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:33:42.442307  233919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:33:42.473465  233919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:33:42.636175  233919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:33:42.769488  233919 docker.go:234] disabling docker service ...
	I1019 17:33:42.769559  233919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:33:42.815639  233919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:33:42.843855  233919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:33:43.038855  233919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:33:43.222919  233919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:33:43.245331  233919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:33:43.274123  233919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:33:43.274227  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.289339  233919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:33:43.289445  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.311465  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.330812  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.343257  233919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:33:43.354044  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.371089  233919 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:33:43.389293  233919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
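Each sed call above replaces an entire key line in /etc/crio/crio.conf.d/02-crio.conf. The equivalent line-anchored rewrite in Go, using a multiline regexp (sample input inlined; the real file is larger):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"`
	// (?m) makes ^ and $ match per line, so the whole key line is replaced.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	re = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Println(conf)
}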
	I1019 17:33:43.399585  233919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:33:43.408164  233919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:33:43.416337  233919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:43.570864  233919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:33:43.773207  233919 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:33:43.773334  233919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:33:43.784739  233919 start.go:564] Will wait 60s for crictl version
	I1019 17:33:43.784901  233919 ssh_runner.go:195] Run: which crictl
	I1019 17:33:43.789752  233919 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:33:43.821496  233919 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:33:43.821632  233919 ssh_runner.go:195] Run: crio --version
	I1019 17:33:43.860255  233919 ssh_runner.go:195] Run: crio --version
	I1019 17:33:43.901042  233919 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1019 17:33:41.600365  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:43.608114  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:46.101413  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:33:43.904077  233919 cli_runner.go:164] Run: docker network inspect embed-certs-296314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:33:43.922110  233919 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:33:43.926519  233919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:33:43.937885  233919 kubeadm.go:884] updating cluster {Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:33:43.937996  233919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:33:43.938058  233919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:33:43.978349  233919 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:33:43.978376  233919 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:33:43.978447  233919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:33:44.015390  233919 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:33:44.015416  233919 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:33:44.015425  233919 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:33:44.015728  233919 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-296314 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:33:44.015843  233919 ssh_runner.go:195] Run: crio config
	I1019 17:33:44.097269  233919 cni.go:84] Creating CNI manager for ""
	I1019 17:33:44.097289  233919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:33:44.097337  233919 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:33:44.097361  233919 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-296314 NodeName:embed-certs-296314 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:33:44.097542  233919 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-296314"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:33:44.097635  233919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:33:44.108335  233919 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:33:44.108436  233919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:33:44.117970  233919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 17:33:44.133069  233919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:33:44.148514  233919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
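The kubeadm config dumped above is rendered from parameters (node name, IP, port, Kubernetes version) and copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal text/template sketch of that rendering technique; the fragment and field names here are made up, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

type params struct {
	NodeName, NodeIP string
	BindPort         int
}

const frag = `localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	// Render the fragment with the values seen in the log.
	t := template.Must(template.New("kubeadm").Parse(frag))
	t.Execute(os.Stdout, params{NodeName: "embed-certs-296314", NodeIP: "192.168.85.2", BindPort: 8443})
}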
	I1019 17:33:44.164235  233919 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:33:44.168513  233919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
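Both host.minikube.internal and control-plane.minikube.internal are pinned with the same idempotent rewrite: drop any existing line for the name, append the new mapping, and copy the result back over /etc/hosts. A Go sketch of that logic (file path illustrative; the real code runs the bash pipeline shown above over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost removes any existing entry for name, then appends "ip<TAB>name",
// mirroring the grep -v / echo pipeline in the log.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(pinHost("/tmp/hosts", "192.168.85.2", "control-plane.minikube.internal"))
}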
	I1019 17:33:44.179551  233919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:33:44.349997  233919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:33:44.377036  233919 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314 for IP: 192.168.85.2
	I1019 17:33:44.377064  233919 certs.go:195] generating shared ca certs ...
	I1019 17:33:44.377080  233919 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:44.377301  233919 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:33:44.377376  233919 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:33:44.377393  233919 certs.go:257] generating profile certs ...
	I1019 17:33:44.377460  233919 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.key
	I1019 17:33:44.377478  233919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.crt with IP's: []
	I1019 17:33:45.427204  233919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.crt ...
	I1019 17:33:45.427267  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.crt: {Name:mk9908ee427c9ddcdaffc981e590bcb4b67e75bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.427526  233919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.key ...
	I1019 17:33:45.427544  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.key: {Name:mk3d0068edc84eda9125979974dc006ec3e7d3de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.427659  233919 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8
	I1019 17:33:45.427692  233919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:33:45.890216  233919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8 ...
	I1019 17:33:45.890247  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8: {Name:mk3c2072648c516b64b7c1f4381726280c111d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.890431  233919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8 ...
	I1019 17:33:45.890447  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8: {Name:mkfee54346a4eed5c6fd19c07a48a7b2f44bee05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:45.890547  233919 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt.d989d7c8 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt
	I1019 17:33:45.890640  233919 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key
	I1019 17:33:45.890706  233919 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key
	I1019 17:33:45.890729  233919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt with IP's: []
	I1019 17:33:46.173874  233919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt ...
	I1019 17:33:46.173903  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt: {Name:mk3b45a1a9b9dd0e89b7a391cef05651ed0f1117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:46.174087  233919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key ...
	I1019 17:33:46.174102  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key: {Name:mk6ffe6968d019c9233d25bf1713984cc3d5332d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:33:46.174291  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:33:46.174337  233919 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:33:46.174351  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:33:46.174378  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:33:46.174415  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:33:46.174446  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:33:46.174491  233919 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:33:46.175140  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:33:46.195127  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:33:46.221170  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:33:46.243675  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:33:46.268214  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:33:46.317488  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:33:46.350153  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:33:46.383588  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:33:46.426247  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:33:46.475242  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:33:46.509757  233919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:33:46.540873  233919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:33:46.564338  233919 ssh_runner.go:195] Run: openssl version
	I1019 17:33:46.570632  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:33:46.579652  233919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:33:46.583866  233919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:33:46.583984  233919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:33:46.639672  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:33:46.649831  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:33:46.659896  233919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:46.664159  233919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:46.664272  233919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:33:46.709834  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:33:46.720413  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:33:46.730359  233919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:33:46.735120  233919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:33:46.735268  233919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:33:46.784686  233919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:33:46.798785  233919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:33:46.806410  233919 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:33:46.806465  233919 kubeadm.go:401] StartCluster: {Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:33:46.806530  233919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:33:46.806692  233919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:33:46.860679  233919 cri.go:89] found id: ""
	I1019 17:33:46.860754  233919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:33:46.878708  233919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:33:46.888471  233919 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:33:46.888536  233919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:33:46.900113  233919 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:33:46.900174  233919 kubeadm.go:158] found existing configuration files:
	
	I1019 17:33:46.900271  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:33:46.911442  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:33:46.911566  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:33:46.920842  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:33:46.930297  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:33:46.930388  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:33:46.940000  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:33:46.950192  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:33:46.950265  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:33:46.961666  233919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:33:46.972242  233919 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:33:46.972376  233919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
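
The four grep-then-rm pairs above all apply one rule: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm regenerates it. A minimal local sketch of that rule, assuming direct file access (minikube itself runs grep and rm over SSH, as the log shows):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // removeIfStale keeps path only when it already mentions endpoint;
    // anything else (including an unreadable file) is removed so that
    // kubeadm can write a fresh copy. Illustrative only.
    func removeIfStale(path, endpoint string) {
        data, err := os.ReadFile(path)
        if err == nil && strings.Contains(string(data), endpoint) {
            return // config already points at the right endpoint
        }
        if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
            log.Printf("removing %s: %v", path, err)
        }
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            removeIfStale(f, endpoint)
        }
    }
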
	I1019 17:33:46.982097  233919 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:33:47.035764  233919 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:33:47.036005  233919 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:33:47.093504  233919 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:33:47.093626  233919 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:33:47.093700  233919 kubeadm.go:319] OS: Linux
	I1019 17:33:47.093776  233919 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:33:47.093852  233919 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:33:47.093937  233919 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:33:47.094007  233919 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:33:47.094067  233919 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:33:47.094124  233919 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:33:47.094194  233919 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:33:47.094272  233919 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:33:47.094354  233919 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:33:47.218784  233919 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:33:47.218944  233919 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:33:47.219082  233919 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:33:47.227195  233919 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:33:47.234259  233919 out.go:252]   - Generating certificates and keys ...
	I1019 17:33:47.234393  233919 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:33:47.234493  233919 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1019 17:33:48.599815  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:51.101852  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:33:48.485745  233919 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:33:48.993077  233919 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:33:49.243083  233919 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:33:50.041877  233919 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:33:50.602928  233919 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:33:50.603273  233919 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-296314 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:33:51.182615  233919 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:33:51.183227  233919 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-296314 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:33:51.359945  233919 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:33:52.210127  233919 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:33:53.121595  233919 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:33:53.122069  233919 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:33:53.478192  233919 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:33:53.955905  233919 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:33:54.284282  233919 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:33:54.338901  233919 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:33:55.172738  233919 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:33:55.173839  233919 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:33:55.181637  233919 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1019 17:33:53.600096  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:33:55.609783  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:33:55.185025  233919 out.go:252]   - Booting up control plane ...
	I1019 17:33:55.185144  233919 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:33:55.185238  233919 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:33:55.186173  233919 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:33:55.220192  233919 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:33:55.220309  233919 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:33:55.230083  233919 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:33:55.230192  233919 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:33:55.230239  233919 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:33:55.427055  233919 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:33:55.427195  233919 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:33:56.935143  233919 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.508123673s
	I1019 17:33:56.942450  233919 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:33:56.942609  233919 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1019 17:33:56.943127  233919 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:33:56.943223  233919 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 17:33:58.102080  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:00.599515  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:34:02.113991  233919 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.170098705s
	I1019 17:34:02.469245  233919 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.52414923s
	I1019 17:34:03.948960  233919 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002467737s
	I1019 17:34:03.969603  233919 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:34:03.982430  233919 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:34:03.998475  233919 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:34:03.998761  233919 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-296314 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:34:04.017399  233919 kubeadm.go:319] [bootstrap-token] Using token: eir7xu.5dylgzny1ipwrk2v
	I1019 17:34:04.020403  233919 out.go:252]   - Configuring RBAC rules ...
	I1019 17:34:04.020551  233919 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:34:04.025344  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:34:04.036411  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:34:04.040617  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:34:04.044980  233919 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:34:04.053046  233919 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:34:04.354041  233919 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:34:04.856590  233919 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:34:05.352949  233919 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:34:05.354033  233919 kubeadm.go:319] 
	I1019 17:34:05.354108  233919 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:34:05.354114  233919 kubeadm.go:319] 
	I1019 17:34:05.354194  233919 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:34:05.354199  233919 kubeadm.go:319] 
	I1019 17:34:05.354225  233919 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:34:05.354286  233919 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:34:05.354345  233919 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:34:05.354350  233919 kubeadm.go:319] 
	I1019 17:34:05.354406  233919 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:34:05.354410  233919 kubeadm.go:319] 
	I1019 17:34:05.354459  233919 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:34:05.354467  233919 kubeadm.go:319] 
	I1019 17:34:05.354563  233919 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:34:05.354643  233919 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:34:05.354714  233919 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:34:05.354718  233919 kubeadm.go:319] 
	I1019 17:34:05.354805  233919 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:34:05.354884  233919 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:34:05.354889  233919 kubeadm.go:319] 
	I1019 17:34:05.354975  233919 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token eir7xu.5dylgzny1ipwrk2v \
	I1019 17:34:05.355082  233919 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 17:34:05.355103  233919 kubeadm.go:319] 	--control-plane 
	I1019 17:34:05.355108  233919 kubeadm.go:319] 
	I1019 17:34:05.355204  233919 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:34:05.355209  233919 kubeadm.go:319] 
	I1019 17:34:05.355294  233919 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token eir7xu.5dylgzny1ipwrk2v \
	I1019 17:34:05.355399  233919 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
	I1019 17:34:05.360433  233919 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 17:34:05.360674  233919 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 17:34:05.360787  233919 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
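
The --discovery-token-ca-cert-hash value in the join commands above is not a hash of the CA file itself: kubeadm pins the CA by hashing the certificate's DER-encoded Subject Public Key Info with SHA-256. A standalone sketch that reproduces the sha256:... string from a CA PEM (the ca.crt path is an example; on a minikube node the CA lives under /var/lib/minikube/certs):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("ca.crt") // example path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm's pubkeypin scheme: SHA-256 over the DER-encoded
        // SubjectPublicKeyInfo, printed as "sha256:<hex>".
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
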
	I1019 17:34:05.360807  233919 cni.go:84] Creating CNI manager for ""
	I1019 17:34:05.360818  233919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:34:05.362143  233919 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1019 17:34:02.600263  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:04.602092  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:34:05.363435  233919 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:34:05.367630  233919 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:34:05.367647  233919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:34:05.384603  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:34:05.743295  233919 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:34:05.743450  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:05.743587  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-296314 minikube.k8s.io/updated_at=2025_10_19T17_34_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=embed-certs-296314 minikube.k8s.io/primary=true
	I1019 17:34:05.898013  233919 ops.go:34] apiserver oom_adj: -16
	I1019 17:34:05.898112  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:06.398229  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:06.899012  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:07.399142  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:07.898660  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:08.398574  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:08.898175  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:09.398425  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:09.898674  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:10.398707  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:10.898511  233919 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:34:11.024267  233919 kubeadm.go:1114] duration metric: took 5.28088454s to wait for elevateKubeSystemPrivileges
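
The run of `kubectl get sa default` calls above, retried roughly every 500ms, is a plain poll loop: the command exits non-zero until kube-controller-manager has created the default ServiceAccount, which is what elevateKubeSystemPrivileges waits on before granting kube-system its cluster-admin binding. A minimal sketch of the same loop, assuming kubectl on PATH and a configured kubeconfig (minikube runs these commands over SSH instead):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Exits 0 only once the "default" ServiceAccount exists.
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                log.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // cadence seen in the log
        }
        log.Fatal("timed out waiting for default service account")
    }
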
	I1019 17:34:11.024298  233919 kubeadm.go:403] duration metric: took 24.217836324s to StartCluster
	I1019 17:34:11.024315  233919 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:11.024375  233919 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:34:11.025672  233919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:11.025899  233919 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:34:11.026014  233919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:34:11.026258  233919 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:34:11.026299  233919 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:34:11.026360  233919 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-296314"
	I1019 17:34:11.026379  233919 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-296314"
	I1019 17:34:11.026404  233919 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:34:11.026665  233919 addons.go:70] Setting default-storageclass=true in profile "embed-certs-296314"
	I1019 17:34:11.026692  233919 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-296314"
	I1019 17:34:11.027018  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:34:11.027463  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:34:11.030766  233919 out.go:179] * Verifying Kubernetes components...
	I1019 17:34:11.034104  233919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:34:11.072408  233919 addons.go:239] Setting addon default-storageclass=true in "embed-certs-296314"
	I1019 17:34:11.072461  233919 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:34:11.072911  233919 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:34:11.073964  233919 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 17:34:07.099829  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:09.100806  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	W1019 17:34:11.106805  232207 pod_ready.go:104] pod "coredns-66bc5c9577-6k8tn" is not "Ready", error: <nil>
	I1019 17:34:11.077028  233919 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:34:11.077052  233919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:34:11.077116  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:34:11.119502  233919 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:34:11.119523  233919 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:34:11.119672  233919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:34:11.120486  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:34:11.148546  233919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:34:11.341780  233919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:34:11.399391  233919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:34:11.415718  233919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:34:11.506441  233919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:34:11.838177  233919 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 17:34:11.841583  233919 node_ready.go:35] waiting up to 6m0s for node "embed-certs-296314" to be "Ready" ...
	I1019 17:34:12.145017  233919 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1019 17:34:12.148755  233919 addons.go:515] duration metric: took 1.122434128s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1019 17:34:12.342872  233919 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-296314" context rescaled to 1 replicas
	I1019 17:34:13.099952  232207 pod_ready.go:94] pod "coredns-66bc5c9577-6k8tn" is "Ready"
	I1019 17:34:13.099982  232207 pod_ready.go:86] duration metric: took 33.506145758s for pod "coredns-66bc5c9577-6k8tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.102952  232207 pod_ready.go:83] waiting for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.108799  232207 pod_ready.go:94] pod "etcd-no-preload-038781" is "Ready"
	I1019 17:34:13.108829  232207 pod_ready.go:86] duration metric: took 5.846406ms for pod "etcd-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.111670  232207 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.116557  232207 pod_ready.go:94] pod "kube-apiserver-no-preload-038781" is "Ready"
	I1019 17:34:13.116584  232207 pod_ready.go:86] duration metric: took 4.886293ms for pod "kube-apiserver-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.119207  232207 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.298000  232207 pod_ready.go:94] pod "kube-controller-manager-no-preload-038781" is "Ready"
	I1019 17:34:13.298030  232207 pod_ready.go:86] duration metric: took 178.7987ms for pod "kube-controller-manager-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.498272  232207 pod_ready.go:83] waiting for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:13.897815  232207 pod_ready.go:94] pod "kube-proxy-2n5k9" is "Ready"
	I1019 17:34:13.897841  232207 pod_ready.go:86] duration metric: took 399.54099ms for pod "kube-proxy-2n5k9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:14.098177  232207 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:14.497786  232207 pod_ready.go:94] pod "kube-scheduler-no-preload-038781" is "Ready"
	I1019 17:34:14.497813  232207 pod_ready.go:86] duration metric: took 399.606849ms for pod "kube-scheduler-no-preload-038781" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:14.497825  232207 pod_ready.go:40] duration metric: took 34.908116786s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:34:14.561744  232207 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:34:14.564947  232207 out.go:179] * Done! kubectl is now configured to use "no-preload-038781" cluster and "default" namespace by default
	W1019 17:34:13.845049  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:15.845245  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:18.345153  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:20.345478  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:22.844605  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:25.344435  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:27.346392  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
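
The W-prefixed node_ready lines above are the same retry-until-true pattern applied to the node's Ready condition, polled on a roughly 2s cadence. A sketch of an equivalent check via kubectl's JSONPath output (an assumption for illustration; minikube queries the API through client-go rather than shelling out):

    package main

    import (
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const node = "embed-certs-296314"
        for {
            out, err := exec.Command("kubectl", "get", "node", node, "-o",
                `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                log.Printf("node %q is Ready", node)
                return
            }
            log.Printf("node %q not Ready yet (will retry)", node)
            time.Sleep(2 * time.Second)
        }
    }
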
	
	
	==> CRI-O <==
	Oct 19 17:34:05 no-preload-038781 crio[653]: time="2025-10-19T17:34:05.41360945Z" level=info msg="Removed container 5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn/dashboard-metrics-scraper" id=0c97ccf4-1ed4-4b8a-ad37-013d59b6a280 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:34:08 no-preload-038781 conmon[1143]: conmon 7295d170c9f1c652ed83 <ninfo>: container 1145 exited with status 1
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.413247901Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f106bf55-df6e-4f8d-b19f-f20b17e67f01 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.417438676Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=89bf5b58-39d6-4565-8726-531f4a35f077 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.41944477Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=68404b7a-e19f-4e47-9369-a94ec9da6477 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.419769977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.42611861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.426349859Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d927fe5c4c68988d4761a004c9e449a8cfaabfc747301ed2f44d7fcd1db53fba/merged/etc/passwd: no such file or directory"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.426377666Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d927fe5c4c68988d4761a004c9e449a8cfaabfc747301ed2f44d7fcd1db53fba/merged/etc/group: no such file or directory"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.426659599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.456697575Z" level=info msg="Created container d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8: kube-system/storage-provisioner/storage-provisioner" id=68404b7a-e19f-4e47-9369-a94ec9da6477 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.457696712Z" level=info msg="Starting container: d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8" id=3a27f68b-5883-44b4-aeb9-61ccd8884f87 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:34:09 no-preload-038781 crio[653]: time="2025-10-19T17:34:09.459727627Z" level=info msg="Started container" PID=1641 containerID=d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8 description=kube-system/storage-provisioner/storage-provisioner id=3a27f68b-5883-44b4-aeb9-61ccd8884f87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=007fc521ae5852077d04214ae39535fac08cd0f3cb3aae5f177cecd6b1911e9e
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.813053101Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.820151152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.820189249Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.820214825Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.823385186Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.823419739Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.823444395Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.826451538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.826492802Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.826516967Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.829601362Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:34:18 no-preload-038781 crio[653]: time="2025-10-19T17:34:18.829634757Z" level=info msg="Updated default CNI network name to kindnet"
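
The CREATE/WRITE/RENAME sequence above is CRI-O's watch on /etc/cni/net.d reacting to kindnet's atomic-write idiom: the full config is written to a .temp file and then renamed over the final .conflist, so no reader ever observes a half-written file. A sketch of that writer-side idiom (the JSON body is a hypothetical placeholder, not kindnet's real config):

    package main

    import (
        "log"
        "os"
    )

    func main() {
        const final = "/etc/cni/net.d/10-kindnet.conflist"
        const temp = final + ".temp"
        // Placeholder body; kindnet writes its real ptp config here.
        conf := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[]}`)

        // Write everything under a temporary name first...
        if err := os.WriteFile(temp, conf, 0o644); err != nil {
            log.Fatal(err)
        }
        // ...then rename: on POSIX the replacement is atomic, so a watcher
        // such as CRI-O sees CREATE/WRITE on .temp and a single RENAME.
        if err := os.Rename(temp, final); err != nil {
            log.Fatal(err)
        }
    }
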
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d1ae7afadcdd6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   007fc521ae585       storage-provisioner                          kube-system
	4e48a039cc1f5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   4c271ef2cef53       dashboard-metrics-scraper-6ffb444bf9-rbgzn   kubernetes-dashboard
	8716b30ad8495       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   338426eafe947       kubernetes-dashboard-855c9754f9-qdn5q        kubernetes-dashboard
	1c6f01729c8ea       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   4158e340d188b       coredns-66bc5c9577-6k8tn                     kube-system
	7295d170c9f1c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   007fc521ae585       storage-provisioner                          kube-system
	aa2e6a947fb42       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   d19ae942ad5e2       kube-proxy-2n5k9                             kube-system
	1dfcb1be4b5bf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   c61c3081d54cf       busybox                                      default
	63a21cb0dd8ac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   5361b5de5552d       kindnet-t6qjz                                kube-system
	4ecdc75b36a4c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3f65b88bb435f       kube-controller-manager-no-preload-038781    kube-system
	2f46f60d6de64       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   80bb29e47dc3c       etcd-no-preload-038781                       kube-system
	0d0e37aed3838       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   575db676691b8       kube-scheduler-no-preload-038781             kube-system
	536e5d3cd6aab       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   2c9f9fbcb5d21       kube-apiserver-no-preload-038781             kube-system
	
	
	==> coredns [1c6f01729c8ea65f68f7c74cd0edce25f7839aa8e906e5eaaf9f59dea56c3592] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48118 - 21177 "HINFO IN 1669950668549980651.2139323910193721934. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025776242s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
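
All three CoreDNS list failures above reduce to one symptom: TCP to the kubernetes Service VIP 10.96.0.1:443 times out until kube-proxy and the CNI have reprogrammed the restarted node, which lines up with the kube-proxy and kindnet restarts recorded later in this log. A quick probe for that condition, meaningful only when run from inside the cluster network:

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.1:443 is the kubernetes Service VIP shown in the errors.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            log.Fatalf("service VIP unreachable: %v", err) // CoreDNS's view
        }
        conn.Close()
        log.Println("service VIP reachable")
    }
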
	
	
	==> describe nodes <==
	Name:               no-preload-038781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-038781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=no-preload-038781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:32:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-038781
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:34:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:34:08 +0000   Sun, 19 Oct 2025 17:32:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-038781
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f7908916-dc6b-4011-8ad7-c40cd54a41fa
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-6k8tn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-038781                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-t6qjz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-038781              250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-038781     200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-2n5k9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-038781              100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rbgzn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qdn5q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 112s                 kube-proxy       
	  Normal   Starting                 52s                  kube-proxy       
	  Warning  CgroupV1                 2m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  119s                 kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 119s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    119s                 kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s                 kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   Starting                 119s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           115s                 node-controller  Node no-preload-038781 event: Registered Node no-preload-038781 in Controller
	  Normal   NodeReady                98s                  kubelet          Node no-preload-038781 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node no-preload-038781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node no-preload-038781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node no-preload-038781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node no-preload-038781 event: Registered Node no-preload-038781 in Controller
	
	
	==> dmesg <==
	[Oct19 17:10] overlayfs: idmapped layers are currently not supported
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2f46f60d6de64b25c99d5aa47d9dc9db10c0069af1a4f16eecbb3dd6f2acb2c4] <==
	{"level":"warn","ts":"2025-10-19T17:33:35.987241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.066440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.093170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.153289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.197231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.247305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.277827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.302748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.321093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.356859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.378808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.424741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.439438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.466787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.490453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.549457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.552911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.585998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.640299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.670647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.706892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.729909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.797707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.849209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:36.940617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48360","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:34:31 up  1:16,  0 user,  load average: 3.47, 3.90, 3.45
	Linux no-preload-038781 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [63a21cb0dd8ac64312c63edbf6eba4361cba29f0413fe4f5a288ccef35e3d0a1] <==
	I1019 17:33:38.596169       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:33:38.596729       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:33:38.596863       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:33:38.596875       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:33:38.596888       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:33:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:33:38.811197       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:33:38.811297       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:33:38.811330       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:33:38.814474       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:34:08.808589       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 17:34:08.809721       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:34:08.811068       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:34:08.811180       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 17:34:10.314504       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:34:10.314582       1 metrics.go:72] Registering metrics
	I1019 17:34:10.314629       1 controller.go:711] "Syncing nftables rules"
	I1019 17:34:18.812720       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:34:18.812775       1 main.go:301] handling current node
	I1019 17:34:28.814833       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:34:28.814882       1 main.go:301] handling current node
	
	
	==> kube-apiserver [536e5d3cd6aab4df09c0f25b4fa64db7b03ae73bd5300a9691e1868e1678cd99] <==
	I1019 17:33:37.954672       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:33:37.955681       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:33:37.955700       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:33:37.956120       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:33:37.956131       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:33:37.956137       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:33:37.956143       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:33:37.960445       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:33:37.960475       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:33:37.960480       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:33:37.960772       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:33:37.960808       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:33:37.997912       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 17:33:38.073247       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:33:38.073646       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:33:38.544527       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:33:38.797237       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:33:38.868375       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:33:38.910013       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:33:38.925929       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:33:39.005044       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.121.64"}
	I1019 17:33:39.023495       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.40.146"}
	I1019 17:33:41.461118       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:33:41.560461       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:33:41.662693       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4ecdc75b36a4c7a3c825f206e45adee636659afda96007f457af8b243c9114c0] <==
	I1019 17:33:41.158739       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:33:41.158745       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:33:41.160746       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:33:41.163302       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 17:33:41.163314       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 17:33:41.163376       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:33:41.163403       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:33:41.163415       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:33:41.163420       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:33:41.163527       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:33:41.163584       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:33:41.166340       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:33:41.167485       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:33:41.169692       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:33:41.173176       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:33:41.177570       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:33:41.177635       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:33:41.177666       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:33:41.184023       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:33:41.189006       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:33:41.193515       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:33:41.193776       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:33:41.204519       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:33:41.208709       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:33:41.213119       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [aa2e6a947fb42538c3f95b4e424f09d0784485f208dbe2872cdb5a5c87988222] <==
	I1019 17:33:38.918121       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:33:39.071768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:33:39.180874       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:33:39.181499       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:33:39.181647       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:33:39.241616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:33:39.241671       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:33:39.262888       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:33:39.263258       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:33:39.263273       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:33:39.266077       1 config.go:200] "Starting service config controller"
	I1019 17:33:39.266095       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:33:39.299456       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:33:39.299486       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:33:39.299516       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:33:39.299521       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:33:39.327685       1 config.go:309] "Starting node config controller"
	I1019 17:33:39.327706       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:33:39.327714       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:33:39.366347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:33:39.414930       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:33:39.415254       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0d0e37aed3838a493242b37f3c40b53f5f97a88b5709f7d8b16dab4324bbcaef] <==
	I1019 17:33:34.585114       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:33:37.721869       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:33:37.726646       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:33:37.726662       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:33:37.726670       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:33:37.898360       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:33:37.900726       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:33:37.916184       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:33:37.916218       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:33:37.917063       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:33:37.917098       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:33:38.019824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:33:41 no-preload-038781 kubelet[773]: I1019 17:33:41.967974     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7eb3b8ac-a1b4-4677-8411-2b730be7c599-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-qdn5q\" (UID: \"7eb3b8ac-a1b4-4677-8411-2b730be7c599\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdn5q"
	Oct 19 17:33:41 no-preload-038781 kubelet[773]: I1019 17:33:41.968047     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2hd\" (UniqueName: \"kubernetes.io/projected/7eb3b8ac-a1b4-4677-8411-2b730be7c599-kube-api-access-vg2hd\") pod \"kubernetes-dashboard-855c9754f9-qdn5q\" (UID: \"7eb3b8ac-a1b4-4677-8411-2b730be7c599\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdn5q"
	Oct 19 17:33:42 no-preload-038781 kubelet[773]: W1019 17:33:42.162683     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4de6d765b1efe4ce1f09d3c85f3e4e51204ed860aa7f0300150a14eb693880cc/crio-4c271ef2cef5396a68aeb6c7e91d14f66c48cddb7255061b24df2bc93cdebff6 WatchSource:0}: Error finding container 4c271ef2cef5396a68aeb6c7e91d14f66c48cddb7255061b24df2bc93cdebff6: Status 404 returned error can't find the container with id 4c271ef2cef5396a68aeb6c7e91d14f66c48cddb7255061b24df2bc93cdebff6
	Oct 19 17:33:42 no-preload-038781 kubelet[773]: I1019 17:33:42.601349     773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:33:49 no-preload-038781 kubelet[773]: I1019 17:33:49.350049     773 scope.go:117] "RemoveContainer" containerID="2a2c8950c24dc7a570645bde8f9d566c54a6709bfacfc00a45a04d20ca8a3fad"
	Oct 19 17:33:50 no-preload-038781 kubelet[773]: I1019 17:33:50.354612     773 scope.go:117] "RemoveContainer" containerID="2a2c8950c24dc7a570645bde8f9d566c54a6709bfacfc00a45a04d20ca8a3fad"
	Oct 19 17:33:50 no-preload-038781 kubelet[773]: I1019 17:33:50.354915     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:33:50 no-preload-038781 kubelet[773]: E1019 17:33:50.355133     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:33:51 no-preload-038781 kubelet[773]: I1019 17:33:51.361500     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:33:51 no-preload-038781 kubelet[773]: E1019 17:33:51.361665     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:33:52 no-preload-038781 kubelet[773]: I1019 17:33:52.360185     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:33:52 no-preload-038781 kubelet[773]: E1019 17:33:52.360343     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.097164     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.395618     773 scope.go:117] "RemoveContainer" containerID="5935970ce6c1ca95cf364a5498f9a3834093b294763b93c0156d089c501bc51f"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.395812     773 scope.go:117] "RemoveContainer" containerID="4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: E1019 17:34:05.396044     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:05 no-preload-038781 kubelet[773]: I1019 17:34:05.428416     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdn5q" podStartSLOduration=12.063826499 podStartE2EDuration="24.428397793s" podCreationTimestamp="2025-10-19 17:33:41 +0000 UTC" firstStartedPulling="2025-10-19 17:33:42.462831547 +0000 UTC m=+12.699255866" lastFinishedPulling="2025-10-19 17:33:54.827402841 +0000 UTC m=+25.063827160" observedRunningTime="2025-10-19 17:33:55.401332825 +0000 UTC m=+25.637757161" watchObservedRunningTime="2025-10-19 17:34:05.428397793 +0000 UTC m=+35.664822120"
	Oct 19 17:34:09 no-preload-038781 kubelet[773]: I1019 17:34:09.411845     773 scope.go:117] "RemoveContainer" containerID="7295d170c9f1c652ed83cb31b1b942d47a5e8f0ac28ddf7808882e1b9c515fda"
	Oct 19 17:34:12 no-preload-038781 kubelet[773]: I1019 17:34:12.075064     773 scope.go:117] "RemoveContainer" containerID="4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	Oct 19 17:34:12 no-preload-038781 kubelet[773]: E1019 17:34:12.075735     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:25 no-preload-038781 kubelet[773]: I1019 17:34:25.097127     773 scope.go:117] "RemoveContainer" containerID="4e48a039cc1f53465f147349ed98f336ddd88df5b62813d3cb4b814ca5c16e1d"
	Oct 19 17:34:25 no-preload-038781 kubelet[773]: E1019 17:34:25.097349     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rbgzn_kubernetes-dashboard(870485be-2dd1-45c4-aba2-4cbe146f83ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rbgzn" podUID="870485be-2dd1-45c4-aba2-4cbe146f83ee"
	Oct 19 17:34:26 no-preload-038781 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:34:26 no-preload-038781 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:34:26 no-preload-038781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8716b30ad849506fd3f8f4715e585b04ced2a15cf9ed5a6881825f2a54647510] <==
	2025/10/19 17:33:54 Starting overwatch
	2025/10/19 17:33:54 Using namespace: kubernetes-dashboard
	2025/10/19 17:33:54 Using in-cluster config to connect to apiserver
	2025/10/19 17:33:54 Using secret token for csrf signing
	2025/10/19 17:33:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:33:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:33:54 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:33:54 Generating JWE encryption key
	2025/10/19 17:33:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:33:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:33:55 Initializing JWE encryption key from synchronized object
	2025/10/19 17:33:55 Creating in-cluster Sidecar client
	2025/10/19 17:33:55 Serving insecurely on HTTP port: 9090
	2025/10/19 17:33:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:34:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7295d170c9f1c652ed83cb31b1b942d47a5e8f0ac28ddf7808882e1b9c515fda] <==
	I1019 17:33:38.526061       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:34:08.527841       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d1ae7afadcdd6d362bde6be2664c6d28fde72b715e677083c6a0695798125bf8] <==
	I1019 17:34:09.515535       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:34:09.532223       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:34:09.532293       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:34:09.538283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:12.993856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:17.253839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:20.852047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:23.905459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:26.928489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:26.936154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:34:26.936455       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:34:26.936636       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-038781_e8f696ba-d0f3-4deb-bd76-f5efcded8734!
	I1019 17:34:26.937238       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3e86efa-396c-4e58-879b-5827a6d5b481", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-038781_e8f696ba-d0f3-4deb-bd76-f5efcded8734 became leader
	W1019 17:34:26.942009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:26.953325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:34:27.037304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-038781_e8f696ba-d0f3-4deb-bd76-f5efcded8734!
	W1019 17:34:28.956301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:28.961222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:30.965674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:30.970790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-038781 -n no-preload-038781
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-038781 -n no-preload-038781: exit status 2 (377.922922ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
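helpers_test.go reads a single field out of minikube's status struct with a Go template; a non-zero exit here reflects component state rather than a command failure, which is why the harness notes it "may be ok". A minimal sketch of checking several components in one call, assuming the profile name from this log and that the Host, Kubelet and Kubeconfig fields are populated alongside APIServer:

	# Sketch: print host, kubelet, apiserver and kubeconfig state together
	out/minikube-linux-arm64 status -p no-preload-038781 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'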
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-038781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
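The field selector above filters pods server-side by phase. A sketch of the same probe with human-readable output instead of jsonpath (context name taken from this log):

	# Sketch: list non-Running pods across all namespaces, with their status
	kubectl --context no-preload-038781 get po -A \
	  --field-selector=status.phase!=Running -o wide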
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (344.543153ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:35:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
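The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-container check: before enabling an addon it asks the runtime which containers are paused ("check paused: list paused"), here by listing runc's state, and the listing fails because /run/runc does not exist inside the node. A sketch for reproducing just that probe by hand, assuming `minikube ssh` accepts a trailing command and using the profile from this log:

	# Sketch: run the same runc listing inside the node container
	out/minikube-linux-arm64 -p embed-certs-296314 ssh -- sudo runc list -f json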
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-296314 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-296314 describe deploy/metrics-server -n kube-system: exit status 1 (122.323349ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-296314 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
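The assertion at start_stop_delete_test.go:219 expects the metrics-server Deployment's container image to carry the injected fake.domain registry prefix, but the Deployment was never created. Had it existed, a sketch for inspecting the image by hand (context name from this log):

	# Sketch: print the container image(s) of the metrics-server deployment
	kubectl --context embed-certs-296314 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'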
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-296314
helpers_test.go:243: (dbg) docker inspect embed-certs-296314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d",
	        "Created": "2025-10-19T17:33:35.165314955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:33:35.232947044Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/hosts",
	        "LogPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d-json.log",
	        "Name": "/embed-certs-296314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-296314:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-296314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d",
	                "LowerDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-296314",
	                "Source": "/var/lib/docker/volumes/embed-certs-296314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-296314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-296314",
	                "name.minikube.sigs.k8s.io": "embed-certs-296314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c4d04789c0e8fe96a2a15f1d2b8fef965f144badfc2a574c15ab848afda3256",
	            "SandboxKey": "/var/run/docker/netns/0c4d04789c0e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-296314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:61:9f:87:3e:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b85768c3935a46e7e3c1643ba28d42a950563959f3252b2b534926365c369610",
	                    "EndpointID": "fcceefd7ec6cedde5430e7ce96e910d80126ed6c533384a837e1afedf6af1fcc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-296314",
	                        "5854ebe0a2d7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
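The inspect output above shows the ephemeral host ports bound for the node container. A sketch for pulling a single binding out of that JSON with Docker's Go-template formatter (container name from this log):

	# Sketch: print the host port mapped to the node's SSH port (22/tcp)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' embed-certs-296314

Against the state captured above this prints 33103.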
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-296314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-296314 logs -n 25: (1.860582496s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-953581 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-953581                │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-953581                │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ ssh     │ -p bridge-953581 sudo crio config                                                                                                                                                                                                             │ bridge-953581                │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ delete  │ -p bridge-953581                                                                                                                                                                                                                              │ bridge-953581                │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ stop    │ -p old-k8s-version-125363 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:32 UTC │
	│ start   │ -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                                                                                               │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ image   │ no-preload-038781 image list --format=json                                                                                                                                                                                                    │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                                                                                               │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:34:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:34:36.363324  239027 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:34:36.363495  239027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:34:36.363523  239027 out.go:374] Setting ErrFile to fd 2...
	I1019 17:34:36.363543  239027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:34:36.363805  239027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:34:36.364279  239027 out.go:368] Setting JSON to false
	I1019 17:34:36.365238  239027 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4624,"bootTime":1760890652,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:34:36.365332  239027 start.go:143] virtualization:  
	I1019 17:34:36.369315  239027 out.go:179] * [default-k8s-diff-port-370596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:34:36.373383  239027 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:34:36.373463  239027 notify.go:221] Checking for updates...
	I1019 17:34:36.379396  239027 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:34:36.382352  239027 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:34:36.385276  239027 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:34:36.388086  239027 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:34:36.391022  239027 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:34:36.394582  239027 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:34:36.394778  239027 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:34:36.428671  239027 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:34:36.428797  239027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:34:36.486920  239027 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:34:36.477699289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:34:36.487028  239027 docker.go:319] overlay module found
	I1019 17:34:36.490237  239027 out.go:179] * Using the docker driver based on user configuration
	I1019 17:34:36.493116  239027 start.go:309] selected driver: docker
	I1019 17:34:36.493138  239027 start.go:930] validating driver "docker" against <nil>
	I1019 17:34:36.493152  239027 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:34:36.493875  239027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:34:36.545472  239027 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:34:36.535828717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:34:36.545639  239027 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 17:34:36.545869  239027 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:34:36.549569  239027 out.go:179] * Using Docker driver with root privileges
	I1019 17:34:36.553048  239027 cni.go:84] Creating CNI manager for ""
	I1019 17:34:36.553126  239027 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:34:36.553135  239027 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:34:36.553453  239027 start.go:353] cluster config:
	{Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:34:36.556733  239027 out.go:179] * Starting "default-k8s-diff-port-370596" primary control-plane node in "default-k8s-diff-port-370596" cluster
	I1019 17:34:36.559635  239027 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:34:36.562636  239027 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:34:36.565603  239027 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:34:36.565660  239027 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:34:36.565686  239027 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:34:36.565691  239027 cache.go:59] Caching tarball of preloaded images
	I1019 17:34:36.565781  239027 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:34:36.565790  239027 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:34:36.565898  239027 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/config.json ...
	I1019 17:34:36.565924  239027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/config.json: {Name:mk9ffc67ff06be82eb79fe7259f965a5dfad513f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:36.590508  239027 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:34:36.590730  239027 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:34:36.590765  239027 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:34:36.590789  239027 start.go:360] acquireMachinesLock for default-k8s-diff-port-370596: {Name:mk4e5a46aec1705453bccb79fee591d547fbb19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:34:36.590936  239027 start.go:364] duration metric: took 131.005µs to acquireMachinesLock for "default-k8s-diff-port-370596"
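Note: the machines lock above is a file lock with a 500ms retry delay and a 10m timeout, acquired here in 131µs because nothing else held it. As a rough shell analogue only, not minikube's actual mechanism (minikube takes this lock in Go; the lock path below is hypothetical):

    # Serialize machine creation behind a file lock, giving up after 600s
    flock -w 600 /tmp/minikube-machines.lock -c 'echo "provisioning default-k8s-diff-port-370596"'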
	I1019 17:34:36.590998  239027 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:34:36.591080  239027 start.go:125] createHost starting for "" (driver="docker")
	W1019 17:34:33.845551  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:35.846585  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	I1019 17:34:36.594596  239027 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:34:36.594843  239027 start.go:159] libmachine.API.Create for "default-k8s-diff-port-370596" (driver="docker")
	I1019 17:34:36.594878  239027 client.go:171] LocalClient.Create starting
	I1019 17:34:36.594961  239027 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 17:34:36.594998  239027 main.go:143] libmachine: Decoding PEM data...
	I1019 17:34:36.595011  239027 main.go:143] libmachine: Parsing certificate...
	I1019 17:34:36.595085  239027 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 17:34:36.595101  239027 main.go:143] libmachine: Decoding PEM data...
	I1019 17:34:36.595111  239027 main.go:143] libmachine: Parsing certificate...
	I1019 17:34:36.595488  239027 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-370596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:34:36.611011  239027 cli_runner.go:211] docker network inspect default-k8s-diff-port-370596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:34:36.611091  239027 network_create.go:284] running [docker network inspect default-k8s-diff-port-370596] to gather additional debugging logs...
	I1019 17:34:36.611111  239027 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-370596
	W1019 17:34:36.626438  239027 cli_runner.go:211] docker network inspect default-k8s-diff-port-370596 returned with exit code 1
	I1019 17:34:36.626474  239027 network_create.go:287] error running [docker network inspect default-k8s-diff-port-370596]: docker network inspect default-k8s-diff-port-370596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-370596 not found
	I1019 17:34:36.626488  239027 network_create.go:289] output of [docker network inspect default-k8s-diff-port-370596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-370596 not found
	
	** /stderr **
	I1019 17:34:36.626637  239027 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:34:36.648074  239027 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
	I1019 17:34:36.648376  239027 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74bebb68d32f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:9e:84:17:01:b0} reservation:<nil>}
	I1019 17:34:36.648752  239027 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9382370e2eea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:16:7c:3f:44:e1} reservation:<nil>}
	I1019 17:34:36.649186  239027 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019adb60}
	I1019 17:34:36.649211  239027 network_create.go:124] attempt to create docker network default-k8s-diff-port-370596 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1019 17:34:36.649283  239027 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-370596 default-k8s-diff-port-370596
	I1019 17:34:36.706871  239027 network_create.go:108] docker network default-k8s-diff-port-370596 192.168.76.0/24 created
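The three "skipping subnet" lines above show the free-subnet scan: minikube walks candidate private /24s in order and takes the first one with no bridge attached. A minimal shell sketch of the same check (the candidate list and the route-table grep are illustrative; the real scan runs in network.go):

    # Pick the first candidate /24 that has no local route (i.e. no docker bridge on it)
    for net in 192.168.49.0/24 192.168.58.0/24 192.168.67.0/24 192.168.76.0/24; do
      ip route | grep -q "^${net%/*}" || { echo "free: $net"; break; }
    done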
	I1019 17:34:36.706921  239027 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-370596" container
	I1019 17:34:36.706992  239027 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:34:36.724718  239027 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-370596 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-370596 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:34:36.744845  239027 oci.go:103] Successfully created a docker volume default-k8s-diff-port-370596
	I1019 17:34:36.744948  239027 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-370596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-370596 --entrypoint /usr/bin/test -v default-k8s-diff-port-370596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:34:37.294026  239027 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-370596
	I1019 17:34:37.294070  239027 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:34:37.294091  239027 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:34:37.294177  239027 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-370596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 17:34:38.345463  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:40.844592  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:42.845664  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	I1019 17:34:41.687459  239027 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-370596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39323043s)
	I1019 17:34:41.687498  239027 kic.go:203] duration metric: took 4.393395421s to extract preloaded images to volume ...
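For reference, the -I lz4 flag in the tar invocation above streams the archive through the lz4 decompressor; an equivalent extraction without that flag would be:

    lz4 -dc preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 | tar -x -C /extractDir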
	W1019 17:34:41.687648  239027 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:34:41.687773  239027 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:34:41.741130  239027 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-370596 --name default-k8s-diff-port-370596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-370596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-370596 --network default-k8s-diff-port-370596 --ip 192.168.76.2 --volume default-k8s-diff-port-370596:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:34:42.118439  239027 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Running}}
	I1019 17:34:42.142402  239027 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:34:42.177657  239027 cli_runner.go:164] Run: docker exec default-k8s-diff-port-370596 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:34:42.254761  239027 oci.go:144] the created container "default-k8s-diff-port-370596" has a running status.
	I1019 17:34:42.254796  239027 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa...
	I1019 17:34:42.947411  239027 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:34:42.969622  239027 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:34:42.993673  239027 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:34:42.993693  239027 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-370596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:34:43.047029  239027 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:34:43.066836  239027 machine.go:94] provisionDockerMachine start ...
	I1019 17:34:43.066952  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:43.085721  239027 main.go:143] libmachine: Using SSH client type: native
	I1019 17:34:43.086143  239027 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1019 17:34:43.086161  239027 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:34:43.086870  239027 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 17:34:46.242209  239027 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-370596
	
	I1019 17:34:46.242276  239027 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-370596"
	I1019 17:34:46.242389  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:46.260047  239027 main.go:143] libmachine: Using SSH client type: native
	I1019 17:34:46.260347  239027 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1019 17:34:46.260364  239027 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-370596 && echo "default-k8s-diff-port-370596" | sudo tee /etc/hostname
	W1019 17:34:45.346332  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	W1019 17:34:47.844607  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	I1019 17:34:46.416792  239027 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-370596
	
	I1019 17:34:46.416964  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:46.434044  239027 main.go:143] libmachine: Using SSH client type: native
	I1019 17:34:46.434365  239027 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1019 17:34:46.434390  239027 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-370596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-370596/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-370596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:34:46.582704  239027 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:34:46.582741  239027 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:34:46.582766  239027 ubuntu.go:190] setting up certificates
	I1019 17:34:46.582775  239027 provision.go:84] configureAuth start
	I1019 17:34:46.582840  239027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:34:46.601581  239027 provision.go:143] copyHostCerts
	I1019 17:34:46.601664  239027 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:34:46.601677  239027 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:34:46.601753  239027 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:34:46.601850  239027 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:34:46.601860  239027 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:34:46.601886  239027 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:34:46.601949  239027 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:34:46.601958  239027 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:34:46.601981  239027 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:34:46.602038  239027 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-370596 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-370596 localhost minikube]
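configureAuth signs a per-machine server certificate with the cluster CA, embedding the SANs listed above. A hedged openssl sketch of the equivalent operation (minikube does this in Go, not via openssl; the key size, validity period, and local file names here are illustrative):

    # 1) key + CSR for the machine
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.default-k8s-diff-port-370596" -out server.csr
    # 2) sign with the cluster CA, adding the SANs from the log (uses bash process substitution)
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:default-k8s-diff-port-370596,DNS:localhost,DNS:minikube')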
	I1019 17:34:46.835907  239027 provision.go:177] copyRemoteCerts
	I1019 17:34:46.835967  239027 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:34:46.836007  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:46.854917  239027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:34:46.958320  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 17:34:46.976136  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:34:46.994391  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:34:47.015609  239027 provision.go:87] duration metric: took 432.82025ms to configureAuth
	I1019 17:34:47.015633  239027 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:34:47.015824  239027 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:34:47.015920  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:47.034105  239027 main.go:143] libmachine: Using SSH client type: native
	I1019 17:34:47.034423  239027 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1019 17:34:47.034438  239027 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:34:47.333989  239027 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:34:47.334051  239027 machine.go:97] duration metric: took 4.267191456s to provisionDockerMachine
	I1019 17:34:47.334076  239027 client.go:174] duration metric: took 10.73919118s to LocalClient.Create
	I1019 17:34:47.334115  239027 start.go:167] duration metric: took 10.739272847s to libmachine.API.Create "default-k8s-diff-port-370596"
	I1019 17:34:47.334144  239027 start.go:293] postStartSetup for "default-k8s-diff-port-370596" (driver="docker")
	I1019 17:34:47.334179  239027 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:34:47.334264  239027 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:34:47.334359  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:47.352183  239027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:34:47.454826  239027 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:34:47.458086  239027 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:34:47.458113  239027 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:34:47.458124  239027 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:34:47.458177  239027 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:34:47.458260  239027 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:34:47.458365  239027 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:34:47.466413  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:34:47.487269  239027 start.go:296] duration metric: took 153.086348ms for postStartSetup
	I1019 17:34:47.487643  239027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:34:47.504270  239027 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/config.json ...
	I1019 17:34:47.504582  239027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:34:47.504629  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:47.523962  239027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:34:47.623666  239027 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:34:47.628578  239027 start.go:128] duration metric: took 11.03747974s to createHost
	I1019 17:34:47.628600  239027 start.go:83] releasing machines lock for "default-k8s-diff-port-370596", held for 11.037652312s
	I1019 17:34:47.628669  239027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:34:47.645862  239027 ssh_runner.go:195] Run: cat /version.json
	I1019 17:34:47.645912  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:47.646199  239027 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:34:47.646248  239027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:34:47.667462  239027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:34:47.672410  239027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:34:47.860810  239027 ssh_runner.go:195] Run: systemctl --version
	I1019 17:34:47.867514  239027 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:34:47.904465  239027 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:34:47.909332  239027 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:34:47.909463  239027 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:34:47.938643  239027 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 17:34:47.938670  239027 start.go:496] detecting cgroup driver to use...
	I1019 17:34:47.938703  239027 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:34:47.938766  239027 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:34:47.957787  239027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:34:47.971591  239027 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:34:47.971673  239027 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:34:47.990274  239027 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:34:48.013395  239027 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:34:48.142455  239027 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:34:48.274375  239027 docker.go:234] disabling docker service ...
	I1019 17:34:48.274441  239027 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:34:48.295656  239027 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:34:48.309238  239027 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:34:48.424073  239027 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:34:48.556831  239027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:34:48.570782  239027 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:34:48.586893  239027 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:34:48.587002  239027 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:34:48.597003  239027 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:34:48.597092  239027 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:34:48.607607  239027 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:34:48.617058  239027 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:34:48.626699  239027 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:34:48.635543  239027 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:34:48.644498  239027 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:34:48.658995  239027 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:34:48.667549  239027 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:34:48.675052  239027 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
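Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands themselves (section placement is an assumption; only the keys touched here are shown):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]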
	I1019 17:34:48.682823  239027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:34:48.806508  239027 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:34:48.940342  239027 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:34:48.940415  239027 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:34:48.945041  239027 start.go:564] Will wait 60s for crictl version
	I1019 17:34:48.945158  239027 ssh_runner.go:195] Run: which crictl
	I1019 17:34:48.949204  239027 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:34:48.975218  239027 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:34:48.975374  239027 ssh_runner.go:195] Run: crio --version
	I1019 17:34:49.004376  239027 ssh_runner.go:195] Run: crio --version
	I1019 17:34:49.039490  239027 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:34:49.042455  239027 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-370596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:34:49.058860  239027 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:34:49.063689  239027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:34:49.073418  239027 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:34:49.073527  239027 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:34:49.073580  239027 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:34:49.111077  239027 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:34:49.111100  239027 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:34:49.111154  239027 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:34:49.135394  239027 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:34:49.135422  239027 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:34:49.135430  239027 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1019 17:34:49.135572  239027 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-370596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
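The empty ExecStart= line in the kubelet drop-in above is the standard systemd idiom for replacing, rather than appending to, the base unit's command:

    # in a *.service.d/*.conf drop-in
    [Service]
    ExecStart=
    ExecStart=/new/binary --with-new-flags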
	I1019 17:34:49.135654  239027 ssh_runner.go:195] Run: crio config
	I1019 17:34:49.203413  239027 cni.go:84] Creating CNI manager for ""
	I1019 17:34:49.203432  239027 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:34:49.203471  239027 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:34:49.203505  239027 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-370596 NodeName:default-k8s-diff-port-370596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:34:49.203651  239027 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-370596"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:34:49.203730  239027 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:34:49.211541  239027 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:34:49.211664  239027 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:34:49.219417  239027 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 17:34:49.232400  239027 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:34:49.245630  239027 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
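The rendered kubeadm config lands in /var/tmp/minikube/kubeadm.yaml.new (2225 bytes, matching the dump above). When checking such a file by hand, recent kubeadm releases ship a validator; this is not part of the test run, just a sketch assuming a kubeadm binary from the v1.34 series:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new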
	I1019 17:34:49.259250  239027 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:34:49.263181  239027 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:34:49.272768  239027 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:34:49.382265  239027 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:34:49.400253  239027 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596 for IP: 192.168.76.2
	I1019 17:34:49.400283  239027 certs.go:195] generating shared ca certs ...
	I1019 17:34:49.400324  239027 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:49.400487  239027 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:34:49.400555  239027 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:34:49.400568  239027 certs.go:257] generating profile certs ...
	I1019 17:34:49.400640  239027 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/client.key
	I1019 17:34:49.400658  239027 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/client.crt with IP's: []
	I1019 17:34:49.735029  239027 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/client.crt ...
	I1019 17:34:49.735061  239027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/client.crt: {Name:mk464dc30f2fff07dfd0d4f3b86beed5f22f6ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:49.735254  239027 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/client.key ...
	I1019 17:34:49.735269  239027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/client.key: {Name:mk0e963bf7a2fe29b9127df42533d73767e7dc33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:49.735366  239027 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key.27fdbacf
	I1019 17:34:49.735383  239027 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.crt.27fdbacf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 17:34:50.205240  239027 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.crt.27fdbacf ...
	I1019 17:34:50.205276  239027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.crt.27fdbacf: {Name:mkdb2befabf16d99aa0ba596c7ffd2ab0c8cb44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:50.205492  239027 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key.27fdbacf ...
	I1019 17:34:50.205511  239027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key.27fdbacf: {Name:mka76718c5e5c910c2b89dca9d656d50a180020b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:50.205610  239027 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.crt.27fdbacf -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.crt
	I1019 17:34:50.205698  239027 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key.27fdbacf -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key
	I1019 17:34:50.205764  239027 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.key
	I1019 17:34:50.205785  239027 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.crt with IP's: []
	I1019 17:34:50.289007  239027 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.crt ...
	I1019 17:34:50.289035  239027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.crt: {Name:mk9048babdd4f777eb50ec857946aa61b764ce6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:50.289201  239027 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.key ...
	I1019 17:34:50.289209  239027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.key: {Name:mk9096cc15a237fa88f21aa112304a2caf0b6845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:34:50.289383  239027 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:34:50.289425  239027 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:34:50.289434  239027 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:34:50.289460  239027 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:34:50.289486  239027 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:34:50.289510  239027 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:34:50.289553  239027 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:34:50.290198  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:34:50.313030  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:34:50.334083  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:34:50.355663  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:34:50.377618  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 17:34:50.400008  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:34:50.419302  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:34:50.439066  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:34:50.457038  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:34:50.475076  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:34:50.492964  239027 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:34:50.511133  239027 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:34:50.524314  239027 ssh_runner.go:195] Run: openssl version
	I1019 17:34:50.531097  239027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:34:50.539437  239027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:34:50.543338  239027 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:34:50.543401  239027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:34:50.584329  239027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:34:50.592437  239027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:34:50.604879  239027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:34:50.608748  239027 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:34:50.608813  239027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:34:50.649739  239027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:34:50.658289  239027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:34:50.666438  239027 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:34:50.669916  239027 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:34:50.669981  239027 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:34:50.711275  239027 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
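The three openssl/ln pairs above implement the standard OpenSSL trust-store convention: hash the certificate subject, then symlink /etc/ssl/certs/<hash>.0 at the PEM file. A minimal local sketch of that step in Go, using the exact openssl invocation from the log (running it outside the minikube node is an assumption for illustration):

// hashlink.go: compute the OpenSSL subject hash of a PEM certificate and
// link /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients trust it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath string) error {
	// Same command the log shows: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}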
	I1019 17:34:50.719840  239027 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:34:50.723290  239027 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:34:50.723348  239027 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:34:50.723415  239027 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:34:50.723480  239027 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:34:50.750201  239027 cri.go:89] found id: ""
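The empty result above ("found id: \"\"") means no kube-system containers exist yet, which is why the flow proceeds as a fresh start. A sketch of that listing step from Go, shelling out to crictl with the exact flags the log shows:

// crilist.go: list kube-system container IDs via crictl's quiet output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; an empty output means none.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}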
	I1019 17:34:50.750283  239027 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:34:50.758365  239027 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:34:50.766603  239027 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:34:50.766699  239027 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:34:50.774688  239027 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:34:50.774708  239027 kubeadm.go:158] found existing configuration files:
	
	I1019 17:34:50.774779  239027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1019 17:34:50.782555  239027 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:34:50.782622  239027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:34:50.790017  239027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1019 17:34:50.797849  239027 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:34:50.797939  239027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:34:50.805556  239027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1019 17:34:50.814048  239027 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:34:50.814113  239027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:34:50.822384  239027 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1019 17:34:50.830450  239027 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:34:50.830571  239027 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
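The four grep/rm pairs above are a stale-config sweep: each kubeconfig is kept only if it already points at the expected control-plane endpoint, and removed otherwise so kubeadm regenerates it. A minimal sketch of that logic (the function names are illustrative, not minikube's):

// cleanconf.go: remove kubeconfigs that do not reference the expected endpoint.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func sweep(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config exists and matches; keep it
		}
		// Missing or pointing elsewhere: remove it (ignoring not-exist
		// errors), mirroring the `grep || rm -f` pairs in the log.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, "remove:", err)
		}
	}
}

func main() {
	sweep("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}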
	I1019 17:34:50.838321  239027 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:34:50.881686  239027 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:34:50.881920  239027 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:34:50.904760  239027 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:34:50.904841  239027 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:34:50.904882  239027 kubeadm.go:319] OS: Linux
	I1019 17:34:50.904938  239027 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:34:50.904993  239027 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:34:50.905046  239027 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:34:50.905100  239027 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:34:50.905155  239027 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:34:50.905210  239027 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:34:50.905261  239027 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:34:50.905329  239027 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:34:50.905388  239027 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:34:50.990579  239027 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:34:50.990707  239027 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:34:50.990818  239027 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:34:51.007910  239027 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:34:51.014703  239027 out.go:252]   - Generating certificates and keys ...
	I1019 17:34:51.014817  239027 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:34:51.014891  239027 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1019 17:34:49.845760  233919 node_ready.go:57] node "embed-certs-296314" has "Ready":"False" status (will retry)
	I1019 17:34:51.350762  233919 node_ready.go:49] node "embed-certs-296314" is "Ready"
	I1019 17:34:51.350793  233919 node_ready.go:38] duration metric: took 39.509127017s for node "embed-certs-296314" to be "Ready" ...
	I1019 17:34:51.350806  233919 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:34:51.350880  233919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:34:51.370684  233919 api_server.go:72] duration metric: took 40.344749029s to wait for apiserver process to appear ...
	I1019 17:34:51.370710  233919 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:34:51.370732  233919 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:34:51.385662  233919 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:34:51.387729  233919 api_server.go:141] control plane version: v1.34.1
	I1019 17:34:51.387753  233919 api_server.go:131] duration metric: took 17.036394ms to wait for apiserver health ...
	I1019 17:34:51.387762  233919 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:34:51.413987  233919 system_pods.go:59] 8 kube-system pods found
	I1019 17:34:51.414018  233919 system_pods.go:61] "coredns-66bc5c9577-2xbw2" [2ed769db-2036-4c5d-8e6a-acfc55d1d5f3] Pending
	I1019 17:34:51.414024  233919 system_pods.go:61] "etcd-embed-certs-296314" [11dcd214-7861-4bf7-a09e-56c31c62ff7a] Running
	I1019 17:34:51.414030  233919 system_pods.go:61] "kindnet-7nwqx" [5844ea2d-de90-4b67-98f7-3794f9f89ce5] Running
	I1019 17:34:51.414034  233919 system_pods.go:61] "kube-apiserver-embed-certs-296314" [1b4e03bb-83bd-4f4c-9e28-5f6edf5074d7] Running
	I1019 17:34:51.414039  233919 system_pods.go:61] "kube-controller-manager-embed-certs-296314" [6b705bc0-b601-487d-a0a1-f18532ec16ca] Running
	I1019 17:34:51.414043  233919 system_pods.go:61] "kube-proxy-5sj42" [95ffe5ff-ab85-4793-8d88-3389d2efd9b3] Running
	I1019 17:34:51.414048  233919 system_pods.go:61] "kube-scheduler-embed-certs-296314" [cb6fc76e-381c-4066-a303-bf07a9c046c7] Running
	I1019 17:34:51.414052  233919 system_pods.go:61] "storage-provisioner" [58c446f1-5fc6-41fd-b166-9bc2c8bc198b] Pending
	I1019 17:34:51.414058  233919 system_pods.go:74] duration metric: took 26.290052ms to wait for pod list to return data ...
	I1019 17:34:51.414065  233919 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:34:51.436249  233919 default_sa.go:45] found service account: "default"
	I1019 17:34:51.436326  233919 default_sa.go:55] duration metric: took 22.254242ms for default service account to be created ...
	I1019 17:34:51.436351  233919 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:34:51.445069  233919 system_pods.go:86] 8 kube-system pods found
	I1019 17:34:51.445154  233919 system_pods.go:89] "coredns-66bc5c9577-2xbw2" [2ed769db-2036-4c5d-8e6a-acfc55d1d5f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:34:51.445177  233919 system_pods.go:89] "etcd-embed-certs-296314" [11dcd214-7861-4bf7-a09e-56c31c62ff7a] Running
	I1019 17:34:51.445200  233919 system_pods.go:89] "kindnet-7nwqx" [5844ea2d-de90-4b67-98f7-3794f9f89ce5] Running
	I1019 17:34:51.445234  233919 system_pods.go:89] "kube-apiserver-embed-certs-296314" [1b4e03bb-83bd-4f4c-9e28-5f6edf5074d7] Running
	I1019 17:34:51.445254  233919 system_pods.go:89] "kube-controller-manager-embed-certs-296314" [6b705bc0-b601-487d-a0a1-f18532ec16ca] Running
	I1019 17:34:51.445273  233919 system_pods.go:89] "kube-proxy-5sj42" [95ffe5ff-ab85-4793-8d88-3389d2efd9b3] Running
	I1019 17:34:51.445293  233919 system_pods.go:89] "kube-scheduler-embed-certs-296314" [cb6fc76e-381c-4066-a303-bf07a9c046c7] Running
	I1019 17:34:51.445324  233919 system_pods.go:89] "storage-provisioner" [58c446f1-5fc6-41fd-b166-9bc2c8bc198b] Pending
	I1019 17:34:51.445357  233919 retry.go:31] will retry after 223.522208ms: missing components: kube-dns
	I1019 17:34:51.684339  233919 system_pods.go:86] 8 kube-system pods found
	I1019 17:34:51.684427  233919 system_pods.go:89] "coredns-66bc5c9577-2xbw2" [2ed769db-2036-4c5d-8e6a-acfc55d1d5f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:34:51.684451  233919 system_pods.go:89] "etcd-embed-certs-296314" [11dcd214-7861-4bf7-a09e-56c31c62ff7a] Running
	I1019 17:34:51.684474  233919 system_pods.go:89] "kindnet-7nwqx" [5844ea2d-de90-4b67-98f7-3794f9f89ce5] Running
	I1019 17:34:51.684508  233919 system_pods.go:89] "kube-apiserver-embed-certs-296314" [1b4e03bb-83bd-4f4c-9e28-5f6edf5074d7] Running
	I1019 17:34:51.684529  233919 system_pods.go:89] "kube-controller-manager-embed-certs-296314" [6b705bc0-b601-487d-a0a1-f18532ec16ca] Running
	I1019 17:34:51.684549  233919 system_pods.go:89] "kube-proxy-5sj42" [95ffe5ff-ab85-4793-8d88-3389d2efd9b3] Running
	I1019 17:34:51.684580  233919 system_pods.go:89] "kube-scheduler-embed-certs-296314" [cb6fc76e-381c-4066-a303-bf07a9c046c7] Running
	I1019 17:34:51.684606  233919 system_pods.go:89] "storage-provisioner" [58c446f1-5fc6-41fd-b166-9bc2c8bc198b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:34:51.684637  233919 retry.go:31] will retry after 245.566672ms: missing components: kube-dns
	I1019 17:34:51.937546  233919 system_pods.go:86] 8 kube-system pods found
	I1019 17:34:51.937629  233919 system_pods.go:89] "coredns-66bc5c9577-2xbw2" [2ed769db-2036-4c5d-8e6a-acfc55d1d5f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:34:51.937652  233919 system_pods.go:89] "etcd-embed-certs-296314" [11dcd214-7861-4bf7-a09e-56c31c62ff7a] Running
	I1019 17:34:51.937676  233919 system_pods.go:89] "kindnet-7nwqx" [5844ea2d-de90-4b67-98f7-3794f9f89ce5] Running
	I1019 17:34:51.937709  233919 system_pods.go:89] "kube-apiserver-embed-certs-296314" [1b4e03bb-83bd-4f4c-9e28-5f6edf5074d7] Running
	I1019 17:34:51.937731  233919 system_pods.go:89] "kube-controller-manager-embed-certs-296314" [6b705bc0-b601-487d-a0a1-f18532ec16ca] Running
	I1019 17:34:51.937752  233919 system_pods.go:89] "kube-proxy-5sj42" [95ffe5ff-ab85-4793-8d88-3389d2efd9b3] Running
	I1019 17:34:51.937782  233919 system_pods.go:89] "kube-scheduler-embed-certs-296314" [cb6fc76e-381c-4066-a303-bf07a9c046c7] Running
	I1019 17:34:51.937809  233919 system_pods.go:89] "storage-provisioner" [58c446f1-5fc6-41fd-b166-9bc2c8bc198b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:34:51.937838  233919 retry.go:31] will retry after 393.954743ms: missing components: kube-dns
	I1019 17:34:52.337509  233919 system_pods.go:86] 8 kube-system pods found
	I1019 17:34:52.337546  233919 system_pods.go:89] "coredns-66bc5c9577-2xbw2" [2ed769db-2036-4c5d-8e6a-acfc55d1d5f3] Running
	I1019 17:34:52.337553  233919 system_pods.go:89] "etcd-embed-certs-296314" [11dcd214-7861-4bf7-a09e-56c31c62ff7a] Running
	I1019 17:34:52.337557  233919 system_pods.go:89] "kindnet-7nwqx" [5844ea2d-de90-4b67-98f7-3794f9f89ce5] Running
	I1019 17:34:52.337562  233919 system_pods.go:89] "kube-apiserver-embed-certs-296314" [1b4e03bb-83bd-4f4c-9e28-5f6edf5074d7] Running
	I1019 17:34:52.337566  233919 system_pods.go:89] "kube-controller-manager-embed-certs-296314" [6b705bc0-b601-487d-a0a1-f18532ec16ca] Running
	I1019 17:34:52.337570  233919 system_pods.go:89] "kube-proxy-5sj42" [95ffe5ff-ab85-4793-8d88-3389d2efd9b3] Running
	I1019 17:34:52.337575  233919 system_pods.go:89] "kube-scheduler-embed-certs-296314" [cb6fc76e-381c-4066-a303-bf07a9c046c7] Running
	I1019 17:34:52.337578  233919 system_pods.go:89] "storage-provisioner" [58c446f1-5fc6-41fd-b166-9bc2c8bc198b] Running
	I1019 17:34:52.337588  233919 system_pods.go:126] duration metric: took 901.216599ms to wait for k8s-apps to be running ...
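The retry.go lines above show the polling pattern used while waiting for k8s-apps: re-check the pod set with a short, growing delay until nothing is missing. A sketch of that pattern under stated assumptions (the checker below is a stand-in, not minikube's actual system_pods logic):

// podwait.go: poll a condition with a growing delay until it holds or times out.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(check func() (bool, string), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, missing := check()
		if ok {
			return nil
		}
		fmt.Printf("will retry after %v: missing components: %s\n", delay, missing)
		time.Sleep(delay)
		if delay < 2*time.Second {
			delay += delay / 2 // grow the delay, roughly like the 223ms -> 245ms -> 394ms steps above
		}
	}
	return errors.New("timed out waiting for components")
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, string) {
		// placeholder condition: pretend kube-dns becomes Running after ~900ms
		if time.Since(start) > 900*time.Millisecond {
			return true, ""
		}
		return false, "kube-dns"
	}, 30*time.Second)
	fmt.Println("done:", err)
}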
	I1019 17:34:52.337601  233919 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:34:52.337660  233919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:34:52.357109  233919 system_svc.go:56] duration metric: took 19.49878ms WaitForService to wait for kubelet
	I1019 17:34:52.357186  233919 kubeadm.go:587] duration metric: took 41.331254215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:34:52.357223  233919 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:34:52.360888  233919 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:34:52.360985  233919 node_conditions.go:123] node cpu capacity is 2
	I1019 17:34:52.361015  233919 node_conditions.go:105] duration metric: took 3.773332ms to run NodePressure ...
	I1019 17:34:52.361039  233919 start.go:242] waiting for startup goroutines ...
	I1019 17:34:52.361072  233919 start.go:247] waiting for cluster config update ...
	I1019 17:34:52.361102  233919 start.go:256] writing updated cluster config ...
	I1019 17:34:52.361499  233919 ssh_runner.go:195] Run: rm -f paused
	I1019 17:34:52.366242  233919 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:34:52.371106  233919 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2xbw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.378283  233919 pod_ready.go:94] pod "coredns-66bc5c9577-2xbw2" is "Ready"
	I1019 17:34:52.378355  233919 pod_ready.go:86] duration metric: took 7.158998ms for pod "coredns-66bc5c9577-2xbw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.381680  233919 pod_ready.go:83] waiting for pod "etcd-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.388226  233919 pod_ready.go:94] pod "etcd-embed-certs-296314" is "Ready"
	I1019 17:34:52.388311  233919 pod_ready.go:86] duration metric: took 6.556929ms for pod "etcd-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.391889  233919 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.398379  233919 pod_ready.go:94] pod "kube-apiserver-embed-certs-296314" is "Ready"
	I1019 17:34:52.398449  233919 pod_ready.go:86] duration metric: took 6.46686ms for pod "kube-apiserver-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.401552  233919 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.771752  233919 pod_ready.go:94] pod "kube-controller-manager-embed-certs-296314" is "Ready"
	I1019 17:34:52.771850  233919 pod_ready.go:86] duration metric: took 370.223458ms for pod "kube-controller-manager-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:52.972261  233919 pod_ready.go:83] waiting for pod "kube-proxy-5sj42" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:53.372529  233919 pod_ready.go:94] pod "kube-proxy-5sj42" is "Ready"
	I1019 17:34:53.372617  233919 pod_ready.go:86] duration metric: took 400.274685ms for pod "kube-proxy-5sj42" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:53.571181  233919 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:53.972134  233919 pod_ready.go:94] pod "kube-scheduler-embed-certs-296314" is "Ready"
	I1019 17:34:53.972212  233919 pod_ready.go:86] duration metric: took 400.954481ms for pod "kube-scheduler-embed-certs-296314" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:34:53.972253  233919 pod_ready.go:40] duration metric: took 1.605926244s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:34:54.070860  233919 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:34:54.074128  233919 out.go:179] * Done! kubectl is now configured to use "embed-certs-296314" cluster and "default" namespace by default
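The pod_ready lines above wait, per label selector, for every matching kube-system pod to report the Ready condition. A sketch of that wait with client-go; the kubeconfig path and selector subset are assumptions for illustration:

// podready.go: wait for pods matching each label selector to be Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				ready = ready && podReady(&pods.Items[i])
			}
			if ready {
				fmt.Println(sel, "is Ready")
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}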
	I1019 17:34:51.455448  239027 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:34:51.968641  239027 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:34:52.981488  239027 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:34:53.373411  239027 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:34:53.674807  239027 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:34:53.675171  239027 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-370596 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:34:54.381213  239027 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:34:54.382687  239027 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-370596 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 17:34:55.291630  239027 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:34:55.976866  239027 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:34:56.697239  239027 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:34:56.697478  239027 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:34:57.416308  239027 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:34:57.476957  239027 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:34:58.207887  239027 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:34:58.756852  239027 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:34:59.149401  239027 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:34:59.150556  239027 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:34:59.155072  239027 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:34:59.158486  239027 out.go:252]   - Booting up control plane ...
	I1019 17:34:59.158622  239027 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:34:59.158717  239027 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:34:59.158793  239027 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:34:59.175095  239027 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:34:59.175465  239027 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:34:59.183297  239027 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:34:59.183727  239027 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:34:59.184016  239027 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:34:59.312110  239027 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:34:59.312238  239027 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:35:00.339718  239027 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00684595s
	I1019 17:35:00.351579  239027 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:35:00.351693  239027 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1019 17:35:00.351844  239027 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:35:00.352233  239027 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
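The control-plane-check step above polls each component's health endpoint until it answers. A sketch of that polling; TLS verification is skipped here purely for brevity (kubeadm itself validates against the cluster CA), and the URLs are the ones from the log:

// cpcheck.go: poll control-plane health endpoints until they return 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://192.168.76.2:8444/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}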
	
	
	==> CRI-O <==
	Oct 19 17:34:51 embed-certs-296314 crio[839]: time="2025-10-19T17:34:51.881149414Z" level=info msg="Created container dbb0b80395236e29b1493036a57f1e82c6365cb1d414d3f18f24143dbad4cb77: kube-system/coredns-66bc5c9577-2xbw2/coredns" id=a83340fb-c1d6-4c51-a6fd-dad1f8dd1fee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:34:51 embed-certs-296314 crio[839]: time="2025-10-19T17:34:51.886818748Z" level=info msg="Starting container: dbb0b80395236e29b1493036a57f1e82c6365cb1d414d3f18f24143dbad4cb77" id=c2a39689-2c8d-4929-899b-950bd2c21321 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:34:51 embed-certs-296314 crio[839]: time="2025-10-19T17:34:51.889602387Z" level=info msg="Started container" PID=1748 containerID=dbb0b80395236e29b1493036a57f1e82c6365cb1d414d3f18f24143dbad4cb77 description=kube-system/coredns-66bc5c9577-2xbw2/coredns id=c2a39689-2c8d-4929-899b-950bd2c21321 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f0d37a3b8117b64d9cc219b14900c33a733e24465d1def85f4f43adb56a5b65
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.658510907Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d836d6ab-ed7d-4dbb-af4c-ad751014a922 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.65863031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.68039853Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:84250d6fe97b48b7854ce3cb97e773bfd6c63c6934637a9bba750623e10c87de UID:5ee07b45-0bf9-4e9d-9224-b8525bbf763b NetNS:/var/run/netns/81abd753-fccb-40f3-80a6-ffdd4b93386a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078828}] Aliases:map[]}"
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.680563316Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.70224822Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:84250d6fe97b48b7854ce3cb97e773bfd6c63c6934637a9bba750623e10c87de UID:5ee07b45-0bf9-4e9d-9224-b8525bbf763b NetNS:/var/run/netns/81abd753-fccb-40f3-80a6-ffdd4b93386a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078828}] Aliases:map[]}"
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.702747214Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.706918295Z" level=info msg="Ran pod sandbox 84250d6fe97b48b7854ce3cb97e773bfd6c63c6934637a9bba750623e10c87de with infra container: default/busybox/POD" id=d836d6ab-ed7d-4dbb-af4c-ad751014a922 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.708132672Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e965cb9-efca-4d88-b79f-aef3abecb9f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.708406136Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e965cb9-efca-4d88-b79f-aef3abecb9f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.708531028Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e965cb9-efca-4d88-b79f-aef3abecb9f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.71247461Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8583f08b-da09-4fe2-b5a6-ceb5694d1f8a name=/runtime.v1.ImageService/PullImage
	Oct 19 17:34:54 embed-certs-296314 crio[839]: time="2025-10-19T17:34:54.715262327Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.861688616Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8583f08b-da09-4fe2-b5a6-ceb5694d1f8a name=/runtime.v1.ImageService/PullImage
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.862956582Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=31462521-274f-4e54-8cf5-646ffd3ae231 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.867279927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c452c6fc-9df7-4e56-827e-1c12bbab8967 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.875313841Z" level=info msg="Creating container: default/busybox/busybox" id=3751aace-6612-43b3-aa6e-495171cb2060 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.876741612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.884885493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.885496907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.905123016Z" level=info msg="Created container 8213526b9666feba6b129b33616e4f50d3bbd7ce427b8da4a64bff17bcd605f8: default/busybox/busybox" id=3751aace-6612-43b3-aa6e-495171cb2060 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.909496947Z" level=info msg="Starting container: 8213526b9666feba6b129b33616e4f50d3bbd7ce427b8da4a64bff17bcd605f8" id=d77e9043-4600-4c4f-aed2-d3b11942172f name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:34:56 embed-certs-296314 crio[839]: time="2025-10-19T17:34:56.915452226Z" level=info msg="Started container" PID=1802 containerID=8213526b9666feba6b129b33616e4f50d3bbd7ce427b8da4a64bff17bcd605f8 description=default/busybox/busybox id=d77e9043-4600-4c4f-aed2-d3b11942172f name=/runtime.v1.RuntimeService/StartContainer sandboxID=84250d6fe97b48b7854ce3cb97e773bfd6c63c6934637a9bba750623e10c87de
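The CRI-O lines above trace the status-check -> pull -> create -> start sequence for the busybox container. A sketch of the "pull only if not present" half of that flow, driven through real crictl subcommands (inspecti and pull) from Go:

// cripull.go: pull an image only when the runtime does not already have it.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	// Mirrors "Image ... not found" -> "Pulling image" -> "Pulled image":
	// inspecti fails for an absent image, so pull in that case.
	if err := exec.Command("sudo", "crictl", "inspecti", img).Run(); err != nil {
		fmt.Println("image not present, pulling:", img)
		out, err := exec.Command("sudo", "crictl", "pull", img).CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	} else {
		fmt.Println("image already present:", img)
	}
}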
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	8213526b9666f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   84250d6fe97b4       busybox                                      default
	dbb0b80395236       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   7f0d37a3b8117       coredns-66bc5c9577-2xbw2                     kube-system
	96f0fae199959       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   1d0c72ba2935c       storage-provisioner                          kube-system
	6f7150e1649b3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   6dc62793ba14f       kube-proxy-5sj42                             kube-system
	8d0653ae1b408       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   f7eb1d9693b69       kindnet-7nwqx                                kube-system
	69dffb02a44e1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   422b1e4badff7       kube-controller-manager-embed-certs-296314   kube-system
	7935b9d1256a9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   d229b3062a800       etcd-embed-certs-296314                      kube-system
	df2275ffb216f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   29e4388b6dd07       kube-apiserver-embed-certs-296314            kube-system
	cccb71b51b094       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   3a491e72bcb0f       kube-scheduler-embed-certs-296314            kube-system
	
	
	==> coredns [dbb0b80395236e29b1493036a57f1e82c6365cb1d414d3f18f24143dbad4cb77] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55417 - 22237 "HINFO IN 7052825512543003307.456648896652043840. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.04788469s
	
	
	==> describe nodes <==
	Name:               embed-certs-296314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-296314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=embed-certs-296314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_34_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:34:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-296314
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:34:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:34:51 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:34:51 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:34:51 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:34:51 +0000   Sun, 19 Oct 2025 17:34:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-296314
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                d8253982-2ff8-43b9-b6f4-cc698577d51f
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-2xbw2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-embed-certs-296314                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-7nwqx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-embed-certs-296314             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-296314    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-5sj42                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-embed-certs-296314             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 68s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 68s)  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 68s)  kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 68s)  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node embed-certs-296314 event: Registered Node embed-certs-296314 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-296314 status is now: NodeReady
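The Conditions block above (MemoryPressure/DiskPressure/PIDPressure/Ready) is also available programmatically. A sketch of reading it via client-go instead of `kubectl describe`; the kubeconfig path is an assumption:

// nodecond.go: print node conditions like the describe output above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-296314", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// e.g. "Ready            True   kubelet is posting ready status"
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Message)
	}
}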
	
	
	==> dmesg <==
	[ +22.762200] overlayfs: idmapped layers are currently not supported
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7935b9d1256a9f53c89c6d3fc319af2a4fc8fd171600b30885776526ea6fe515] <==
	{"level":"warn","ts":"2025-10-19T17:33:59.642246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.662918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.693467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.730895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.764282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.798413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.830754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.879323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.913180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.946998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:33:59.977264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.009018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.050791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.074913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.132191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.159408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.190740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.225062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.251636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.288141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.364440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.424640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.437762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.470909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:34:00.582524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47266","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:35:04 up  1:17,  0 user,  load average: 4.23, 4.03, 3.51
	Linux embed-certs-296314 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d0653ae1b4080b08264b5fb85a491aab6720d09933c77c6edebe2204c3fb5e3] <==
	I1019 17:34:10.797969       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:34:10.798979       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:34:10.799181       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:34:10.799236       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:34:10.799271       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:34:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:34:11.004281       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:34:11.004305       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:34:11.004313       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:34:11.005972       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:34:41.004023       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:34:41.005375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:34:41.005375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:34:41.006692       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 17:34:42.204610       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:34:42.204733       1 metrics.go:72] Registering metrics
	I1019 17:34:42.204857       1 controller.go:711] "Syncing nftables rules"
	I1019 17:34:51.011161       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:34:51.011215       1 main.go:301] handling current node
	I1019 17:35:01.006645       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:35:01.006705       1 main.go:301] handling current node
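The kindnet log above shows the standard controller startup sequence: start informers, wait for caches to sync (retrying through the transient i/o timeouts logged at 17:34:41), then begin handling nodes. A sketch of that sync wait with a client-go shared informer factory; the kubeconfig path is an assumption:

// cachesync.go: start a node informer and block until its cache is synced.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Reflectors retry list/watch on their own; this blocks until the first
	// successful list has populated the local cache.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches are synced; controller can start handling nodes")
}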
	
	
	==> kube-apiserver [df2275ffb216f12681e94604467a9c46b7467aa9eb105820dfd8b44175e1602c] <==
	I1019 17:34:01.982898       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:34:02.020912       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:34:02.045566       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:34:02.046365       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 17:34:02.076667       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:34:02.104212       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:34:02.104955       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:34:02.562068       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:34:02.568991       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:34:02.569014       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:34:03.369220       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:34:03.449514       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:34:03.608701       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:34:03.616556       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 17:34:03.617574       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:34:03.622519       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:34:04.556550       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:34:04.810253       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:34:04.854911       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:34:04.867238       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:34:09.769932       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:34:09.775544       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:34:10.257214       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:34:10.460963       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1019 17:35:02.562159       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:38314: use of closed network connection
	
	
	==> kube-controller-manager [69dffb02a44e1d4e0b2ede7003b62967da9f3e45f81260ef01f0173d66bd2a18] <==
	I1019 17:34:09.658871       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:34:09.659027       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:34:09.660336       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:34:09.660461       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:34:09.662350       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:34:09.662521       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:34:09.662561       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:34:09.662861       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:34:09.664137       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:34:09.664809       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:34:09.665227       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:34:09.665351       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:34:09.665396       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 17:34:09.665437       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:34:09.665466       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:34:09.665475       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:34:09.665481       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:34:09.667626       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:34:09.667782       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:34:09.670786       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:34:09.682919       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-296314" podCIDRs=["10.244.0.0/24"]
	I1019 17:34:09.701977       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:34:09.702071       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:34:09.702103       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:34:54.868107       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6f7150e1649b3e9c78c3d002535c39398c7875ec9e3e2cad6593ea354be990f9] <==
	I1019 17:34:10.751280       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:34:10.868264       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:34:10.969886       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:34:10.983396       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:34:10.983485       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:34:11.162857       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:34:11.162920       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:34:11.178797       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:34:11.179154       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:34:11.179171       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:34:11.180345       1 config.go:200] "Starting service config controller"
	I1019 17:34:11.180358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:34:11.186828       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:34:11.186848       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:34:11.186870       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:34:11.186874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:34:11.187326       1 config.go:309] "Starting node config controller"
	I1019 17:34:11.187334       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:34:11.187340       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:34:11.280790       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:34:11.287464       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:34:11.287500       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cccb71b51b0945372962a5488b67846ca4271f6f2dc28a22e74d9bfbf0a5740b] <==
	I1019 17:34:02.008175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:34:02.008237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:34:02.047478       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1019 17:34:02.063292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:34:02.070978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:34:02.071059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:34:02.071109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:34:02.071157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:34:02.071500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:34:02.074851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:34:02.074991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:34:02.075131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:34:02.075221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:34:02.075274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:34:02.075344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:34:02.075386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:34:02.075436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:34:02.075484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:34:02.075566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:34:02.075660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:34:02.075709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:34:02.075780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 17:34:02.937035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:34:03.307861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1019 17:34:05.348219       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:34:09 embed-certs-296314 kubelet[1306]: I1019 17:34:09.717631    1306 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:34:09 embed-certs-296314 kubelet[1306]: I1019 17:34:09.718917    1306 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416483    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5844ea2d-de90-4b67-98f7-3794f9f89ce5-lib-modules\") pod \"kindnet-7nwqx\" (UID: \"5844ea2d-de90-4b67-98f7-3794f9f89ce5\") " pod="kube-system/kindnet-7nwqx"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416576    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rn4h\" (UniqueName: \"kubernetes.io/projected/5844ea2d-de90-4b67-98f7-3794f9f89ce5-kube-api-access-6rn4h\") pod \"kindnet-7nwqx\" (UID: \"5844ea2d-de90-4b67-98f7-3794f9f89ce5\") " pod="kube-system/kindnet-7nwqx"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416667    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5844ea2d-de90-4b67-98f7-3794f9f89ce5-cni-cfg\") pod \"kindnet-7nwqx\" (UID: \"5844ea2d-de90-4b67-98f7-3794f9f89ce5\") " pod="kube-system/kindnet-7nwqx"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416760    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5844ea2d-de90-4b67-98f7-3794f9f89ce5-xtables-lock\") pod \"kindnet-7nwqx\" (UID: \"5844ea2d-de90-4b67-98f7-3794f9f89ce5\") " pod="kube-system/kindnet-7nwqx"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416786    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95ffe5ff-ab85-4793-8d88-3389d2efd9b3-kube-proxy\") pod \"kube-proxy-5sj42\" (UID: \"95ffe5ff-ab85-4793-8d88-3389d2efd9b3\") " pod="kube-system/kube-proxy-5sj42"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416804    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95ffe5ff-ab85-4793-8d88-3389d2efd9b3-xtables-lock\") pod \"kube-proxy-5sj42\" (UID: \"95ffe5ff-ab85-4793-8d88-3389d2efd9b3\") " pod="kube-system/kube-proxy-5sj42"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416879    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95ffe5ff-ab85-4793-8d88-3389d2efd9b3-lib-modules\") pod \"kube-proxy-5sj42\" (UID: \"95ffe5ff-ab85-4793-8d88-3389d2efd9b3\") " pod="kube-system/kube-proxy-5sj42"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.416912    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtk4\" (UniqueName: \"kubernetes.io/projected/95ffe5ff-ab85-4793-8d88-3389d2efd9b3-kube-api-access-7gtk4\") pod \"kube-proxy-5sj42\" (UID: \"95ffe5ff-ab85-4793-8d88-3389d2efd9b3\") " pod="kube-system/kube-proxy-5sj42"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.530066    1306 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: W1019 17:34:10.613601    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/crio-f7eb1d9693b693457d0b1db2c4b8f4c8ae75045bf09c7d3a0d419ee6ff725052 WatchSource:0}: Error finding container f7eb1d9693b693457d0b1db2c4b8f4c8ae75045bf09c7d3a0d419ee6ff725052: Status 404 returned error can't find the container with id f7eb1d9693b693457d0b1db2c4b8f4c8ae75045bf09c7d3a0d419ee6ff725052
	Oct 19 17:34:10 embed-certs-296314 kubelet[1306]: I1019 17:34:10.999200    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5sj42" podStartSLOduration=0.999126574 podStartE2EDuration="999.126574ms" podCreationTimestamp="2025-10-19 17:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:34:10.962050937 +0000 UTC m=+6.352001059" watchObservedRunningTime="2025-10-19 17:34:10.999126574 +0000 UTC m=+6.389076696"
	Oct 19 17:34:13 embed-certs-296314 kubelet[1306]: I1019 17:34:13.079085    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7nwqx" podStartSLOduration=3.079064922 podStartE2EDuration="3.079064922s" podCreationTimestamp="2025-10-19 17:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:34:11.036746154 +0000 UTC m=+6.426696284" watchObservedRunningTime="2025-10-19 17:34:13.079064922 +0000 UTC m=+8.469015036"
	Oct 19 17:34:51 embed-certs-296314 kubelet[1306]: I1019 17:34:51.320249    1306 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:34:51 embed-certs-296314 kubelet[1306]: E1019 17:34:51.397841    1306 status_manager.go:1018] "Failed to get status for pod" err="pods \"storage-provisioner\" is forbidden: User \"system:node:embed-certs-296314\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-296314' and this object" podUID="58c446f1-5fc6-41fd-b166-9bc2c8bc198b" pod="kube-system/storage-provisioner"
	Oct 19 17:34:51 embed-certs-296314 kubelet[1306]: I1019 17:34:51.532302    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58c446f1-5fc6-41fd-b166-9bc2c8bc198b-tmp\") pod \"storage-provisioner\" (UID: \"58c446f1-5fc6-41fd-b166-9bc2c8bc198b\") " pod="kube-system/storage-provisioner"
	Oct 19 17:34:51 embed-certs-296314 kubelet[1306]: I1019 17:34:51.532500    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m64sj\" (UniqueName: \"kubernetes.io/projected/2ed769db-2036-4c5d-8e6a-acfc55d1d5f3-kube-api-access-m64sj\") pod \"coredns-66bc5c9577-2xbw2\" (UID: \"2ed769db-2036-4c5d-8e6a-acfc55d1d5f3\") " pod="kube-system/coredns-66bc5c9577-2xbw2"
	Oct 19 17:34:51 embed-certs-296314 kubelet[1306]: I1019 17:34:51.532608    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r487p\" (UniqueName: \"kubernetes.io/projected/58c446f1-5fc6-41fd-b166-9bc2c8bc198b-kube-api-access-r487p\") pod \"storage-provisioner\" (UID: \"58c446f1-5fc6-41fd-b166-9bc2c8bc198b\") " pod="kube-system/storage-provisioner"
	Oct 19 17:34:51 embed-certs-296314 kubelet[1306]: I1019 17:34:51.532698    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed769db-2036-4c5d-8e6a-acfc55d1d5f3-config-volume\") pod \"coredns-66bc5c9577-2xbw2\" (UID: \"2ed769db-2036-4c5d-8e6a-acfc55d1d5f3\") " pod="kube-system/coredns-66bc5c9577-2xbw2"
	Oct 19 17:34:51 embed-certs-296314 kubelet[1306]: W1019 17:34:51.775554    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/crio-7f0d37a3b8117b64d9cc219b14900c33a733e24465d1def85f4f43adb56a5b65 WatchSource:0}: Error finding container 7f0d37a3b8117b64d9cc219b14900c33a733e24465d1def85f4f43adb56a5b65: Status 404 returned error can't find the container with id 7f0d37a3b8117b64d9cc219b14900c33a733e24465d1def85f4f43adb56a5b65
	Oct 19 17:34:52 embed-certs-296314 kubelet[1306]: I1019 17:34:52.067141    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2xbw2" podStartSLOduration=42.06712084 podStartE2EDuration="42.06712084s" podCreationTimestamp="2025-10-19 17:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:34:52.044530856 +0000 UTC m=+47.434480978" watchObservedRunningTime="2025-10-19 17:34:52.06712084 +0000 UTC m=+47.457070954"
	Oct 19 17:34:54 embed-certs-296314 kubelet[1306]: I1019 17:34:54.348487    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.348460137000004 podStartE2EDuration="42.348460137s" podCreationTimestamp="2025-10-19 17:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:34:52.095149174 +0000 UTC m=+47.485099287" watchObservedRunningTime="2025-10-19 17:34:54.348460137 +0000 UTC m=+49.738410251"
	Oct 19 17:34:54 embed-certs-296314 kubelet[1306]: I1019 17:34:54.464102    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wh9n\" (UniqueName: \"kubernetes.io/projected/5ee07b45-0bf9-4e9d-9224-b8525bbf763b-kube-api-access-9wh9n\") pod \"busybox\" (UID: \"5ee07b45-0bf9-4e9d-9224-b8525bbf763b\") " pod="default/busybox"
	Oct 19 17:34:54 embed-certs-296314 kubelet[1306]: W1019 17:34:54.704715    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/crio-84250d6fe97b48b7854ce3cb97e773bfd6c63c6934637a9bba750623e10c87de WatchSource:0}: Error finding container 84250d6fe97b48b7854ce3cb97e773bfd6c63c6934637a9bba750623e10c87de: Status 404 returned error can't find the container with id 84250d6fe97b48b7854ce3cb97e773bfd6c63c6934637a9bba750623e10c87de
	
	
	==> storage-provisioner [96f0fae19995987757f19c0d8e5d185db5c2c005cb2d1992341c4300e1a3d864] <==
	I1019 17:34:51.807735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:34:51.829524       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:34:51.829644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:34:51.837495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:51.844676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:34:51.845136       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:34:51.845381       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-296314_db682676-13a4-44ac-b93d-53ef222bb8ba!
	I1019 17:34:51.849859       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f3e02ef7-e677-43d7-8f2d-de68a05d0331", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-296314_db682676-13a4-44ac-b93d-53ef222bb8ba became leader
	W1019 17:34:51.850101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:51.865799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:34:51.958635       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-296314_db682676-13a4-44ac-b93d-53ef222bb8ba!
	W1019 17:34:53.869981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:53.878870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:55.883484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:55.889655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:57.893434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:57.899545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:59.903881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:34:59.909971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:35:01.913791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:35:01.922212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:35:03.935339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:35:03.943344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-296314 -n embed-certs-296314
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-296314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.48s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (278.772236ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:36:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-370596 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-370596 describe deploy/metrics-server -n kube-system: exit status 1 (98.212571ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-370596 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-370596
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-370596:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47",
	        "Created": "2025-10-19T17:34:41.755702895Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239413,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:34:41.828074803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/hosts",
	        "LogPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47-json.log",
	        "Name": "/default-k8s-diff-port-370596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-370596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-370596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47",
	                "LowerDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-370596",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-370596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-370596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-370596",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-370596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1bfa38a18ba14112a83b53ec545f460f88d96b291438a7ee709795df304125e2",
	            "SandboxKey": "/var/run/docker/netns/1bfa38a18ba1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-370596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:2a:6a:35:90:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ae64488c7e77a883b5d278e8675d09c05353cf5ff587cc6ffef79a9a972f538",
	                    "EndpointID": "ca38cbb77a9ee050970b70245a0c6f93c6c0589e0d39106d8f90165c96c074cc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-370596",
	                        "fe1a19329d9f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
E1019 17:36:08.969080    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-370596 logs -n 25
E1019 17:36:09.212739    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-370596 logs -n 25: (1.168866979s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-125363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │                     │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ stop    │ -p old-k8s-version-125363 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:31 UTC │ 19 Oct 25 17:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:32 UTC │
	│ start   │ -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:32 UTC │ 19 Oct 25 17:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                                                                                               │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ image   │ no-preload-038781 image list --format=json                                                                                                                                                                                                    │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                                                                                               │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:35:18
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:35:18.516928  242330 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:35:18.517070  242330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:35:18.517082  242330 out.go:374] Setting ErrFile to fd 2...
	I1019 17:35:18.517087  242330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:35:18.517372  242330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:35:18.517823  242330 out.go:368] Setting JSON to false
	I1019 17:35:18.518814  242330 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4666,"bootTime":1760890652,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:35:18.518885  242330 start.go:143] virtualization:  
	I1019 17:35:18.522064  242330 out.go:179] * [embed-certs-296314] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:35:18.526170  242330 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:35:18.526285  242330 notify.go:221] Checking for updates...
	I1019 17:35:18.532317  242330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:35:18.535257  242330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:35:18.538653  242330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:35:18.541683  242330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:35:18.544676  242330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:35:18.548065  242330 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:35:18.548666  242330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:35:18.583423  242330 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:35:18.583547  242330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:35:18.659580  242330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:35:18.64583555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:35:18.659693  242330 docker.go:319] overlay module found
	I1019 17:35:18.662813  242330 out.go:179] * Using the docker driver based on existing profile
	I1019 17:35:18.665779  242330 start.go:309] selected driver: docker
	I1019 17:35:18.665799  242330 start.go:930] validating driver "docker" against &{Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:35:18.665905  242330 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:35:18.666661  242330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:35:18.723076  242330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:35:18.713809518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:35:18.723468  242330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:35:18.723500  242330 cni.go:84] Creating CNI manager for ""
	I1019 17:35:18.723553  242330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:35:18.723598  242330 start.go:353] cluster config:
	{Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:35:18.726759  242330 out.go:179] * Starting "embed-certs-296314" primary control-plane node in "embed-certs-296314" cluster
	I1019 17:35:18.729681  242330 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:35:18.732699  242330 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:35:18.735569  242330 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:35:18.735653  242330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:35:18.735670  242330 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:35:18.735698  242330 cache.go:59] Caching tarball of preloaded images
	I1019 17:35:18.735793  242330 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:35:18.735803  242330 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:35:18.735909  242330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json ...
	I1019 17:35:18.769908  242330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:35:18.769931  242330 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:35:18.769947  242330 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:35:18.769969  242330 start.go:360] acquireMachinesLock for embed-certs-296314: {Name:mkbadf116eb8b8b2fc66452f2f3b93b38bb1a004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:35:18.770036  242330 start.go:364] duration metric: took 45.72µs to acquireMachinesLock for "embed-certs-296314"
	I1019 17:35:18.770059  242330 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:35:18.770069  242330 fix.go:54] fixHost starting: 
	I1019 17:35:18.770323  242330 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:35:18.788087  242330 fix.go:112] recreateIfNeeded on embed-certs-296314: state=Stopped err=<nil>
	W1019 17:35:18.788117  242330 fix.go:138] unexpected machine state, will restart: <nil>
	W1019 17:35:17.134796  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:19.634828  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	I1019 17:35:18.791381  242330 out.go:252] * Restarting existing docker container for "embed-certs-296314" ...
	I1019 17:35:18.791482  242330 cli_runner.go:164] Run: docker start embed-certs-296314
	I1019 17:35:19.041832  242330 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:35:19.064354  242330 kic.go:430] container "embed-certs-296314" state is running.
	I1019 17:35:19.065291  242330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:35:19.088726  242330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/config.json ...
	I1019 17:35:19.088962  242330 machine.go:94] provisionDockerMachine start ...
	I1019 17:35:19.089029  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:19.109590  242330 main.go:143] libmachine: Using SSH client type: native
	I1019 17:35:19.110109  242330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1019 17:35:19.110124  242330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:35:19.111129  242330 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 17:35:22.262242  242330 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-296314
	
	I1019 17:35:22.262310  242330 ubuntu.go:182] provisioning hostname "embed-certs-296314"
	I1019 17:35:22.262402  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:22.280717  242330 main.go:143] libmachine: Using SSH client type: native
	I1019 17:35:22.281026  242330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1019 17:35:22.281042  242330 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-296314 && echo "embed-certs-296314" | sudo tee /etc/hostname
	I1019 17:35:22.446425  242330 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-296314
	
	I1019 17:35:22.446503  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:22.464657  242330 main.go:143] libmachine: Using SSH client type: native
	I1019 17:35:22.464967  242330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1019 17:35:22.464989  242330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-296314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-296314/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-296314' | sudo tee -a /etc/hosts; 
				fi
			fi
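
Note: the provisioning script just run is idempotent, and it follows the Debian convention of mapping the machine's own hostname to 127.0.1.1: an existing 127.0.1.1 entry is rewritten in place, and a new one is appended only when none exists. A minimal hand-run equivalent (hostname hard-coded here purely for illustration):

    # map the hostname to 127.0.1.1 exactly once
    if ! grep -q 'embed-certs-296314' /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i 's/^127\.0\.1\.1[[:space:]].*/127.0.1.1 embed-certs-296314/' /etc/hosts
      else
        echo '127.0.1.1 embed-certs-296314' | sudo tee -a /etc/hosts
      fi
    fi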
	I1019 17:35:22.615135  242330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:35:22.615180  242330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:35:22.615203  242330 ubuntu.go:190] setting up certificates
	I1019 17:35:22.615213  242330 provision.go:84] configureAuth start
	I1019 17:35:22.615278  242330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:35:22.635331  242330 provision.go:143] copyHostCerts
	I1019 17:35:22.635400  242330 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:35:22.635418  242330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:35:22.635517  242330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:35:22.635657  242330 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:35:22.635663  242330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:35:22.635690  242330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:35:22.635742  242330 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:35:22.635747  242330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:35:22.635769  242330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:35:22.635819  242330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.embed-certs-296314 san=[127.0.0.1 192.168.85.2 embed-certs-296314 localhost minikube]
	I1019 17:35:23.459519  242330 provision.go:177] copyRemoteCerts
	I1019 17:35:23.459582  242330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:35:23.459630  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:23.481063  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:23.587570  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:35:23.607305  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:35:23.627939  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:35:23.655412  242330 provision.go:87] duration metric: took 1.040184426s to configureAuth
	I1019 17:35:23.655444  242330 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:35:23.655674  242330 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:35:23.655797  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:23.674767  242330 main.go:143] libmachine: Using SSH client type: native
	I1019 17:35:23.675085  242330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1019 17:35:23.675105  242330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:35:24.004253  242330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:35:24.004284  242330 machine.go:97] duration metric: took 4.915311827s to provisionDockerMachine
	I1019 17:35:24.004297  242330 start.go:293] postStartSetup for "embed-certs-296314" (driver="docker")
	I1019 17:35:24.004308  242330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:35:24.004397  242330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:35:24.004466  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:24.029326  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:24.139120  242330 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:35:24.142837  242330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:35:24.142868  242330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:35:24.142880  242330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:35:24.142937  242330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:35:24.143022  242330 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:35:24.143126  242330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:35:24.151042  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:35:24.170257  242330 start.go:296] duration metric: took 165.943995ms for postStartSetup
	I1019 17:35:24.170375  242330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:35:24.170439  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:24.194082  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:24.295999  242330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:35:24.301598  242330 fix.go:56] duration metric: took 5.531507851s for fixHost
	I1019 17:35:24.301655  242330 start.go:83] releasing machines lock for "embed-certs-296314", held for 5.531606995s
	I1019 17:35:24.301770  242330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-296314
	I1019 17:35:24.319420  242330 ssh_runner.go:195] Run: cat /version.json
	I1019 17:35:24.319516  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:24.319806  242330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:35:24.319877  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:24.344360  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:24.350865  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:24.541406  242330 ssh_runner.go:195] Run: systemctl --version
	I1019 17:35:24.548331  242330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:35:24.589382  242330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:35:24.594345  242330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:35:24.594415  242330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:35:24.603633  242330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
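
Note: conflicting bridge/podman CNI configs are sidelined by renaming them with a .mk_disabled suffix rather than deleting them, so the change is reversible. Run by hand, the same find expression needs its parentheses escaped; a rough equivalent:

    # rename any bridge/podman CNI configs out of the way (reversible)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;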
	I1019 17:35:24.603654  242330 start.go:496] detecting cgroup driver to use...
	I1019 17:35:24.603685  242330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:35:24.603743  242330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:35:24.619766  242330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:35:24.639175  242330 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:35:24.639260  242330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:35:24.673247  242330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:35:24.689482  242330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:35:24.809883  242330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:35:24.932019  242330 docker.go:234] disabling docker service ...
	I1019 17:35:24.932163  242330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:35:24.949014  242330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:35:24.965984  242330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:35:25.116631  242330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:35:25.248590  242330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:35:25.262023  242330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:35:25.276504  242330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:35:25.276580  242330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:35:25.285404  242330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:35:25.285477  242330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:35:25.294640  242330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:35:25.303457  242330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:35:25.312646  242330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:35:25.321453  242330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:35:25.331259  242330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:35:25.340067  242330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:35:25.348774  242330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:35:25.356204  242330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
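
Note: pulling the sed edits above together, the CRI-O drop-in they produce should contain at least the settings below. This is reconstructed from the commands, not captured from the node; the TOML section headers are shown only for orientation and come from CRI-O's stock config layout.

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]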
	I1019 17:35:25.363996  242330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:35:25.474020  242330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:35:25.622850  242330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:35:25.622999  242330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:35:25.630499  242330 start.go:564] Will wait 60s for crictl version
	I1019 17:35:25.630691  242330 ssh_runner.go:195] Run: which crictl
	I1019 17:35:25.638827  242330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:35:25.665886  242330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:35:25.666022  242330 ssh_runner.go:195] Run: crio --version
	I1019 17:35:25.695220  242330 ssh_runner.go:195] Run: crio --version
	I1019 17:35:25.727402  242330 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1019 17:35:22.136388  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:24.635383  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	I1019 17:35:25.730231  242330 cli_runner.go:164] Run: docker network inspect embed-certs-296314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:35:25.747597  242330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:35:25.751975  242330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
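
Note: the rewrite-to-/tmp-then-cp dance above (rather than sed -i or mv) is most likely because /etc/hosts is bind-mounted into the container: cp truncates and rewrites the existing inode, whereas a rename-based edit would fail with "Device or resource busy". The same idiom, spelled out:

    # update a bind-mounted /etc/hosts in place (cp keeps the inode)
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts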
	I1019 17:35:25.763625  242330 kubeadm.go:884] updating cluster {Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:35:25.763796  242330 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:35:25.763882  242330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:35:25.799282  242330 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:35:25.799306  242330 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:35:25.799368  242330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:35:25.830514  242330 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:35:25.830571  242330 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:35:25.830579  242330 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:35:25.830686  242330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-296314 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
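
Note: the kubelet unit override above uses the standard systemd drop-in pattern: the bare ExecStart= line first clears the command inherited from the base kubelet.service (systemd rejects multiple ExecStart values for non-oneshot services), and the following line sets the full replacement. The shape of such a drop-in, trimmed to the essentials:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (pattern only)
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf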
	I1019 17:35:25.830773  242330 ssh_runner.go:195] Run: crio config
	I1019 17:35:25.887997  242330 cni.go:84] Creating CNI manager for ""
	I1019 17:35:25.888023  242330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:35:25.888039  242330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:35:25.888063  242330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-296314 NodeName:embed-certs-296314 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:35:25.888246  242330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-296314"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
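
Note: this rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to sanity-check a config in this shape without changing the node is kubeadm's dry-run mode (assuming kubeadm is on PATH and run with sufficient privileges):

    # parse and validate the config; applies no changes to the host
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run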
	
	I1019 17:35:25.888324  242330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:35:25.896621  242330 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:35:25.896690  242330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:35:25.904780  242330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 17:35:25.920403  242330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:35:25.937070  242330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1019 17:35:25.951023  242330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:35:25.955016  242330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:35:25.965378  242330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:35:26.082988  242330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:35:26.100045  242330 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314 for IP: 192.168.85.2
	I1019 17:35:26.100108  242330 certs.go:195] generating shared ca certs ...
	I1019 17:35:26.100141  242330 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:35:26.100317  242330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:35:26.100401  242330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:35:26.100433  242330 certs.go:257] generating profile certs ...
	I1019 17:35:26.100541  242330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/client.key
	I1019 17:35:26.100621  242330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key.d989d7c8
	I1019 17:35:26.100693  242330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key
	I1019 17:35:26.100827  242330 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:35:26.100886  242330 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:35:26.100912  242330 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:35:26.100967  242330 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:35:26.101019  242330 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:35:26.101070  242330 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:35:26.101135  242330 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:35:26.101756  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:35:26.124784  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:35:26.150010  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:35:26.173814  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:35:26.197674  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:35:26.221388  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:35:26.249318  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:35:26.273702  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/embed-certs-296314/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:35:26.301753  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:35:26.327392  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:35:26.347741  242330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:35:26.368417  242330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:35:26.383673  242330 ssh_runner.go:195] Run: openssl version
	I1019 17:35:26.390056  242330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:35:26.398613  242330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:35:26.402340  242330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:35:26.402406  242330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:35:26.445644  242330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:35:26.454515  242330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:35:26.464492  242330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:35:26.468663  242330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:35:26.468758  242330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:35:26.509773  242330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:35:26.518471  242330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:35:26.527104  242330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:35:26.531221  242330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:35:26.531321  242330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:35:26.572423  242330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
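
Note: the openssl x509 -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: TLS libraries locate a CA under /etc/ssl/certs by a file named <subject-hash>.0, so each installed PEM gets a symlink named after its hash (51391683.0, 3ec20f2e.0 and b5213941.0 here). By hand:

    # install a CA where OpenSSL's hashed lookup will find it
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"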
	I1019 17:35:26.580465  242330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:35:26.584404  242330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:35:26.625945  242330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:35:26.673232  242330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:35:26.715559  242330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:35:26.761494  242330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:35:26.808845  242330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
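
Note: each -checkend 86400 call above exits zero only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit is what signals that a cert needs regenerating before reuse. Usable standalone:

    # does the apiserver cert survive the next 24 hours?
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "cert still valid tomorrow"
    else
      echo "cert expires within 24h; regenerate"
    fi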
	I1019 17:35:26.872150  242330 kubeadm.go:401] StartCluster: {Name:embed-certs-296314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:35:26.872304  242330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:35:26.872412  242330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:35:26.946265  242330 cri.go:89] found id: "419c95753ba617267c87fde14322f90237df72a7488e84bda081428a2e533e7b"
	I1019 17:35:26.946325  242330 cri.go:89] found id: "f1ebcf0400230671abb8861c8f1296b2ddc8747887ce982a7032673710caf431"
	I1019 17:35:26.946355  242330 cri.go:89] found id: "601d05c29e65eea670a097054cee3344d68d6b3c679c2b5a8588e8ba24deefab"
	I1019 17:35:26.946372  242330 cri.go:89] found id: "1b872d3de58c84db020f0ee9ad021aaf524cc7e1a2f5753ee9ccc615f3d60b9e"
	I1019 17:35:26.946415  242330 cri.go:89] found id: ""
	I1019 17:35:26.946498  242330 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:35:26.974239  242330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:35:26Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:35:26.974377  242330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:35:26.996774  242330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:35:26.996844  242330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:35:26.996925  242330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:35:27.015574  242330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:35:27.016276  242330 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-296314" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:35:27.016617  242330 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-296314" cluster setting kubeconfig missing "embed-certs-296314" context setting]
	I1019 17:35:27.017154  242330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:35:27.019028  242330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:35:27.040946  242330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:35:27.041028  242330 kubeadm.go:602] duration metric: took 44.165835ms to restartPrimaryControlPlane
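
Note: whether the control plane needs reconfiguring is decided by the plain diff run at 17:35:27.019 above: the freshly rendered kubeadm.yaml.new is compared with the kubeadm.yaml already on disk, and an empty diff short-circuits the restart. The gist of that gate, by hand (not minikube's exact code):

    # reconfigure only when the rendered config actually changed
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
      echo "config unchanged; reuse the running control plane"
    else
      echo "config drifted; restart the control plane with the new file"
    fi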
	I1019 17:35:27.041068  242330 kubeadm.go:403] duration metric: took 168.937855ms to StartCluster
	I1019 17:35:27.041099  242330 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:35:27.041182  242330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:35:27.042636  242330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:35:27.042994  242330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:35:27.043577  242330 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:35:27.043533  242330 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:35:27.043742  242330 addons.go:70] Setting dashboard=true in profile "embed-certs-296314"
	I1019 17:35:27.043772  242330 addons.go:239] Setting addon dashboard=true in "embed-certs-296314"
	W1019 17:35:27.043784  242330 addons.go:248] addon dashboard should already be in state true
	I1019 17:35:27.043811  242330 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:35:27.043755  242330 addons.go:70] Setting default-storageclass=true in profile "embed-certs-296314"
	I1019 17:35:27.043919  242330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-296314"
	I1019 17:35:27.044261  242330 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:35:27.044297  242330 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:35:27.043747  242330 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-296314"
	I1019 17:35:27.046519  242330 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-296314"
	W1019 17:35:27.046548  242330 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:35:27.046587  242330 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:35:27.047101  242330 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:35:27.049524  242330 out.go:179] * Verifying Kubernetes components...
	I1019 17:35:27.053614  242330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:35:27.098261  242330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:35:27.102381  242330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:35:27.102404  242330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:35:27.102474  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:27.113455  242330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:35:27.113949  242330 addons.go:239] Setting addon default-storageclass=true in "embed-certs-296314"
	W1019 17:35:27.113966  242330 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:35:27.113990  242330 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:35:27.114454  242330 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:35:27.123709  242330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:35:27.128401  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:35:27.128436  242330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:35:27.128507  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:27.168906  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:27.169515  242330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:35:27.169536  242330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:35:27.169588  242330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:35:27.190941  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:27.200202  242330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:35:27.405342  242330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:35:27.450171  242330 node_ready.go:35] waiting up to 6m0s for node "embed-certs-296314" to be "Ready" ...
	I1019 17:35:27.454907  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:35:27.454931  242330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:35:27.504777  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:35:27.504799  242330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:35:27.507891  242330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:35:27.527652  242330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:35:27.544487  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:35:27.544551  242330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:35:27.581390  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:35:27.581453  242330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:35:27.656739  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:35:27.656808  242330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:35:27.753409  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:35:27.753492  242330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:35:27.779933  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:35:27.779999  242330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:35:27.802936  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:35:27.803004  242330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:35:27.821284  242330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:35:27.821350  242330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:35:27.836619  242330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1019 17:35:27.135571  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:29.633731  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	I1019 17:35:31.695598  242330 node_ready.go:49] node "embed-certs-296314" is "Ready"
	I1019 17:35:31.695627  242330 node_ready.go:38] duration metric: took 4.245359846s for node "embed-certs-296314" to be "Ready" ...
	I1019 17:35:31.695640  242330 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:35:31.695716  242330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:35:33.290465  242330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.782537293s)
	I1019 17:35:33.290555  242330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.762883252s)
	I1019 17:35:33.290693  242330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.453994033s)
	I1019 17:35:33.290859  242330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.595130631s)
	I1019 17:35:33.290876  242330 api_server.go:72] duration metric: took 6.247743927s to wait for apiserver process to appear ...
	I1019 17:35:33.290882  242330 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:35:33.290894  242330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:35:33.293793  242330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-296314 addons enable metrics-server
	
	I1019 17:35:33.319211  242330 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
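	The healthz wait above is a plain HTTP poll against the apiserver until it answers 200 "ok". A minimal sketch of that loop, assuming an *http.Client already configured to trust the cluster's CA (TLS setup omitted):
	
		package main
	
		import (
			"fmt"
			"io"
			"net/http"
			"strings"
			"time"
		)
	
		// waitHealthy polls url until it returns HTTP 200 with body "ok",
		// or gives up once the timeout elapses.
		func waitHealthy(client *http.Client, url string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				resp, err := client.Get(url)
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
						return nil
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("no healthy apiserver at %s within %s", url, timeout)
		}
	
		func main() {
			fmt.Println(waitHealthy(http.DefaultClient, "https://192.168.85.2:8443/healthz", 30*time.Second))
		}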
	I1019 17:35:33.320808  242330 api_server.go:141] control plane version: v1.34.1
	I1019 17:35:33.320833  242330 api_server.go:131] duration metric: took 29.944928ms to wait for apiserver health ...
	I1019 17:35:33.320841  242330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:35:33.322999  242330 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 17:35:33.325783  242330 addons.go:515] duration metric: took 6.282246294s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 17:35:33.326798  242330 system_pods.go:59] 8 kube-system pods found
	I1019 17:35:33.326838  242330 system_pods.go:61] "coredns-66bc5c9577-2xbw2" [2ed769db-2036-4c5d-8e6a-acfc55d1d5f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:35:33.326848  242330 system_pods.go:61] "etcd-embed-certs-296314" [11dcd214-7861-4bf7-a09e-56c31c62ff7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:35:33.326857  242330 system_pods.go:61] "kindnet-7nwqx" [5844ea2d-de90-4b67-98f7-3794f9f89ce5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:35:33.326865  242330 system_pods.go:61] "kube-apiserver-embed-certs-296314" [1b4e03bb-83bd-4f4c-9e28-5f6edf5074d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:35:33.326877  242330 system_pods.go:61] "kube-controller-manager-embed-certs-296314" [6b705bc0-b601-487d-a0a1-f18532ec16ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:35:33.326889  242330 system_pods.go:61] "kube-proxy-5sj42" [95ffe5ff-ab85-4793-8d88-3389d2efd9b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:35:33.326930  242330 system_pods.go:61] "kube-scheduler-embed-certs-296314" [cb6fc76e-381c-4066-a303-bf07a9c046c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:35:33.326938  242330 system_pods.go:61] "storage-provisioner" [58c446f1-5fc6-41fd-b166-9bc2c8bc198b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:35:33.326943  242330 system_pods.go:74] duration metric: took 6.097596ms to wait for pod list to return data ...
	I1019 17:35:33.326951  242330 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:35:33.329951  242330 default_sa.go:45] found service account: "default"
	I1019 17:35:33.329977  242330 default_sa.go:55] duration metric: took 3.020614ms for default service account to be created ...
	I1019 17:35:33.329985  242330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:35:33.333077  242330 system_pods.go:86] 8 kube-system pods found
	I1019 17:35:33.333109  242330 system_pods.go:89] "coredns-66bc5c9577-2xbw2" [2ed769db-2036-4c5d-8e6a-acfc55d1d5f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:35:33.333118  242330 system_pods.go:89] "etcd-embed-certs-296314" [11dcd214-7861-4bf7-a09e-56c31c62ff7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:35:33.333127  242330 system_pods.go:89] "kindnet-7nwqx" [5844ea2d-de90-4b67-98f7-3794f9f89ce5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:35:33.333135  242330 system_pods.go:89] "kube-apiserver-embed-certs-296314" [1b4e03bb-83bd-4f4c-9e28-5f6edf5074d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:35:33.333143  242330 system_pods.go:89] "kube-controller-manager-embed-certs-296314" [6b705bc0-b601-487d-a0a1-f18532ec16ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:35:33.333150  242330 system_pods.go:89] "kube-proxy-5sj42" [95ffe5ff-ab85-4793-8d88-3389d2efd9b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:35:33.333161  242330 system_pods.go:89] "kube-scheduler-embed-certs-296314" [cb6fc76e-381c-4066-a303-bf07a9c046c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:35:33.333172  242330 system_pods.go:89] "storage-provisioner" [58c446f1-5fc6-41fd-b166-9bc2c8bc198b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:35:33.333180  242330 system_pods.go:126] duration metric: took 3.188812ms to wait for k8s-apps to be running ...
	I1019 17:35:33.333191  242330 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:35:33.333246  242330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:35:33.347289  242330 system_svc.go:56] duration metric: took 14.089875ms WaitForService to wait for kubelet
	I1019 17:35:33.347319  242330 kubeadm.go:587] duration metric: took 6.304185126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:35:33.347339  242330 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:35:33.352200  242330 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:35:33.352235  242330 node_conditions.go:123] node cpu capacity is 2
	I1019 17:35:33.352247  242330 node_conditions.go:105] duration metric: took 4.903438ms to run NodePressure ...
	I1019 17:35:33.352264  242330 start.go:242] waiting for startup goroutines ...
	I1019 17:35:33.352272  242330 start.go:247] waiting for cluster config update ...
	I1019 17:35:33.352283  242330 start.go:256] writing updated cluster config ...
	I1019 17:35:33.352567  242330 ssh_runner.go:195] Run: rm -f paused
	I1019 17:35:33.356379  242330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:35:33.359955  242330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2xbw2" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:35:31.634781  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:33.635045  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:36.134483  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:35.366231  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:37.366406  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:38.134599  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:40.634990  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:39.366789  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:41.865795  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:42.635111  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:44.635535  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:44.365253  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:46.368109  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:47.134492  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:49.634674  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:48.867140  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:51.365490  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:53.368193  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:51.634798  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	W1019 17:35:54.134353  239027 node_ready.go:57] node "default-k8s-diff-port-370596" has "Ready":"False" status (will retry)
	I1019 17:35:55.634393  239027 node_ready.go:49] node "default-k8s-diff-port-370596" is "Ready"
	I1019 17:35:55.634425  239027 node_ready.go:38] duration metric: took 40.50330747s for node "default-k8s-diff-port-370596" to be "Ready" ...
	I1019 17:35:55.634438  239027 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:35:55.634497  239027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:35:55.648675  239027 api_server.go:72] duration metric: took 41.235757861s to wait for apiserver process to appear ...
	I1019 17:35:55.648700  239027 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:35:55.648720  239027 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1019 17:35:55.657197  239027 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1019 17:35:55.658276  239027 api_server.go:141] control plane version: v1.34.1
	I1019 17:35:55.658306  239027 api_server.go:131] duration metric: took 9.599586ms to wait for apiserver health ...
	I1019 17:35:55.658315  239027 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:35:55.663886  239027 system_pods.go:59] 8 kube-system pods found
	I1019 17:35:55.663919  239027 system_pods.go:61] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:35:55.663938  239027 system_pods.go:61] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:35:55.663945  239027 system_pods.go:61] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:35:55.663949  239027 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running
	I1019 17:35:55.663954  239027 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running
	I1019 17:35:55.663960  239027 system_pods.go:61] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:35:55.663964  239027 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running
	I1019 17:35:55.663969  239027 system_pods.go:61] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:35:55.663975  239027 system_pods.go:74] duration metric: took 5.655349ms to wait for pod list to return data ...
	I1019 17:35:55.663984  239027 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:35:55.666713  239027 default_sa.go:45] found service account: "default"
	I1019 17:35:55.666743  239027 default_sa.go:55] duration metric: took 2.752893ms for default service account to be created ...
	I1019 17:35:55.666754  239027 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:35:55.670231  239027 system_pods.go:86] 8 kube-system pods found
	I1019 17:35:55.670269  239027 system_pods.go:89] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:35:55.670277  239027 system_pods.go:89] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:35:55.670283  239027 system_pods.go:89] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:35:55.670288  239027 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running
	I1019 17:35:55.670293  239027 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running
	I1019 17:35:55.670298  239027 system_pods.go:89] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:35:55.670302  239027 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running
	I1019 17:35:55.670309  239027 system_pods.go:89] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:35:55.670333  239027 retry.go:31] will retry after 275.264275ms: missing components: kube-dns
	I1019 17:35:55.952241  239027 system_pods.go:86] 8 kube-system pods found
	I1019 17:35:55.952338  239027 system_pods.go:89] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:35:55.952360  239027 system_pods.go:89] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:35:55.952400  239027 system_pods.go:89] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:35:55.952422  239027 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running
	I1019 17:35:55.952442  239027 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running
	I1019 17:35:55.952483  239027 system_pods.go:89] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:35:55.952506  239027 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running
	I1019 17:35:55.952528  239027 system_pods.go:89] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:35:55.952582  239027 retry.go:31] will retry after 247.742562ms: missing components: kube-dns
	I1019 17:35:56.204508  239027 system_pods.go:86] 8 kube-system pods found
	I1019 17:35:56.204594  239027 system_pods.go:89] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:35:56.204608  239027 system_pods.go:89] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:35:56.204617  239027 system_pods.go:89] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:35:56.204621  239027 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running
	I1019 17:35:56.204633  239027 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running
	I1019 17:35:56.204639  239027 system_pods.go:89] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:35:56.204644  239027 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running
	I1019 17:35:56.204666  239027 system_pods.go:89] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:35:56.204687  239027 retry.go:31] will retry after 445.856488ms: missing components: kube-dns
	W1019 17:35:55.865105  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:35:57.865473  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	I1019 17:35:56.653776  239027 system_pods.go:86] 8 kube-system pods found
	I1019 17:35:56.653815  239027 system_pods.go:89] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:35:56.653824  239027 system_pods.go:89] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:35:56.653830  239027 system_pods.go:89] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:35:56.653834  239027 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running
	I1019 17:35:56.653838  239027 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running
	I1019 17:35:56.653842  239027 system_pods.go:89] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:35:56.653846  239027 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running
	I1019 17:35:56.653852  239027 system_pods.go:89] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 17:35:56.653868  239027 retry.go:31] will retry after 543.580366ms: missing components: kube-dns
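	The retry.go lines above show the wait-for-kube-dns loop backing off between checks. A minimal stand-alone sketch of the same pattern (retryUntil is a hypothetical helper, not minikube's retry package; the waits roughly match the 250-550ms intervals in the log):
	
		package main
	
		import (
			"errors"
			"fmt"
			"math/rand"
			"time"
		)
	
		// retryUntil re-runs check after a jittered backoff until it succeeds
		// or the overall budget is spent.
		func retryUntil(budget time.Duration, check func() error) error {
			start := time.Now()
			for {
				err := check()
				if err == nil {
					return nil
				}
				if time.Since(start) > budget {
					return fmt.Errorf("gave up after %s: %w", budget, err)
				}
				// jittered wait in roughly the 200-550ms range
				wait := 200*time.Millisecond + time.Duration(rand.Int63n(int64(350*time.Millisecond)))
				fmt.Printf("will retry after %s: %v\n", wait, err)
				time.Sleep(wait)
			}
		}
	
		func main() {
			_ = retryUntil(2*time.Second, func() error { return errors.New("missing components: kube-dns") })
		}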
	I1019 17:35:57.201751  239027 system_pods.go:86] 8 kube-system pods found
	I1019 17:35:57.201786  239027 system_pods.go:89] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Running
	I1019 17:35:57.201813  239027 system_pods.go:89] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:35:57.201819  239027 system_pods.go:89] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:35:57.201824  239027 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running
	I1019 17:35:57.201836  239027 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running
	I1019 17:35:57.201840  239027 system_pods.go:89] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:35:57.201844  239027 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running
	I1019 17:35:57.201852  239027 system_pods.go:89] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Running
	I1019 17:35:57.201860  239027 system_pods.go:126] duration metric: took 1.535100763s to wait for k8s-apps to be running ...
	I1019 17:35:57.201867  239027 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:35:57.201928  239027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:35:57.215451  239027 system_svc.go:56] duration metric: took 13.574402ms WaitForService to wait for kubelet
	I1019 17:35:57.215481  239027 kubeadm.go:587] duration metric: took 42.802568493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:35:57.215500  239027 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:35:57.218521  239027 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:35:57.218636  239027 node_conditions.go:123] node cpu capacity is 2
	I1019 17:35:57.218659  239027 node_conditions.go:105] duration metric: took 3.134231ms to run NodePressure ...
	I1019 17:35:57.218672  239027 start.go:242] waiting for startup goroutines ...
	I1019 17:35:57.218680  239027 start.go:247] waiting for cluster config update ...
	I1019 17:35:57.218703  239027 start.go:256] writing updated cluster config ...
	I1019 17:35:57.219003  239027 ssh_runner.go:195] Run: rm -f paused
	I1019 17:35:57.222485  239027 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:35:57.226269  239027 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vjhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.231484  239027 pod_ready.go:94] pod "coredns-66bc5c9577-vjhwx" is "Ready"
	I1019 17:35:57.231505  239027 pod_ready.go:86] duration metric: took 5.212348ms for pod "coredns-66bc5c9577-vjhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.233984  239027 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.238578  239027 pod_ready.go:94] pod "etcd-default-k8s-diff-port-370596" is "Ready"
	I1019 17:35:57.238602  239027 pod_ready.go:86] duration metric: took 4.593607ms for pod "etcd-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.241172  239027 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.245957  239027 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-370596" is "Ready"
	I1019 17:35:57.245981  239027 pod_ready.go:86] duration metric: took 4.784469ms for pod "kube-apiserver-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.248274  239027 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.627383  239027 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-370596" is "Ready"
	I1019 17:35:57.627457  239027 pod_ready.go:86] duration metric: took 379.1501ms for pod "kube-controller-manager-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:57.827810  239027 pod_ready.go:83] waiting for pod "kube-proxy-24xql" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:58.227047  239027 pod_ready.go:94] pod "kube-proxy-24xql" is "Ready"
	I1019 17:35:58.227076  239027 pod_ready.go:86] duration metric: took 399.240211ms for pod "kube-proxy-24xql" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:58.427578  239027 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:58.826576  239027 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-370596" is "Ready"
	I1019 17:35:58.826602  239027 pod_ready.go:86] duration metric: took 398.979799ms for pod "kube-scheduler-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:35:58.826616  239027 pod_ready.go:40] duration metric: took 1.604095245s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:35:58.894844  239027 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:35:58.898257  239027 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-370596" cluster and "default" namespace by default
	W1019 17:36:00.381467  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:36:02.865979  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:36:04.867146  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	W1019 17:36:07.365468  242330 pod_ready.go:104] pod "coredns-66bc5c9577-2xbw2" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 19 17:35:55 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:55.956061259Z" level=info msg="Starting container: d9bc4cb2b4bf334feab26a56933e07bd3203411a5015122b0302dfd63e6df267" id=0ee68303-fe58-4dd6-8dac-e24d5ac22c49 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:35:55 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:55.956427303Z" level=info msg="Started container" PID=1728 containerID=1ad15d3ccc440b17d9f517c67d79efc2e84aba3f66b43451746a5d8f08d37e46 description=kube-system/storage-provisioner/storage-provisioner id=ae8b0560-4fa9-4c4b-b422-656370173a7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b81b629eed3f055b1f570c1feb3139b2138ccfad1a1e7453ec3974a23ab4ab16
	Oct 19 17:35:55 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:55.958292207Z" level=info msg="Started container" PID=1733 containerID=d9bc4cb2b4bf334feab26a56933e07bd3203411a5015122b0302dfd63e6df267 description=kube-system/coredns-66bc5c9577-vjhwx/coredns id=0ee68303-fe58-4dd6-8dac-e24d5ac22c49 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f2e26d33b8d81f9ac0b19e05ce9fa689bb9302747f6d668e87fa328c73231313
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.436360443Z" level=info msg="Running pod sandbox: default/busybox/POD" id=80c6bac8-bd6f-4c44-ac29-f211ef7f5068 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.436438393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.442445812Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a555948b50d3d6c327f83166e760fe4886260c3fb3758aee894fd71ad5852b3c UID:fde11acc-3723-4708-bdc8-173c2bf1233d NetNS:/var/run/netns/56594c22-3482-40b4-84ea-8ceb171a6bf5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d360}] Aliases:map[]}"
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.442481423Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.457010128Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a555948b50d3d6c327f83166e760fe4886260c3fb3758aee894fd71ad5852b3c UID:fde11acc-3723-4708-bdc8-173c2bf1233d NetNS:/var/run/netns/56594c22-3482-40b4-84ea-8ceb171a6bf5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d360}] Aliases:map[]}"
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.457168907Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.460483506Z" level=info msg="Ran pod sandbox a555948b50d3d6c327f83166e760fe4886260c3fb3758aee894fd71ad5852b3c with infra container: default/busybox/POD" id=80c6bac8-bd6f-4c44-ac29-f211ef7f5068 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.462060832Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f2171cb0-b694-4f1c-8d2e-d33353df44a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.462184658Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f2171cb0-b694-4f1c-8d2e-d33353df44a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.462220375Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f2171cb0-b694-4f1c-8d2e-d33353df44a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.463231433Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=68402e99-2199-47c4-86a4-7ecf9b30cf23 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:35:59 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:35:59.465845804Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.55081867Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=68402e99-2199-47c4-86a4-7ecf9b30cf23 name=/runtime.v1.ImageService/PullImage
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.551536538Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d885cdbc-4dd8-4533-97a2-690dff7ed7b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.555053108Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=70f694a4-1262-440d-b6ca-e5954bff0099 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.560838609Z" level=info msg="Creating container: default/busybox/busybox" id=31a7ecf8-8fd0-4005-8dcd-fbb4b207146c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.56164086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.566303416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.567130234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.584012768Z" level=info msg="Created container 98cfb5009f95ad5a2f4b3ba9c528fccb342f01eaf2ae300260b99aa86f019ab6: default/busybox/busybox" id=31a7ecf8-8fd0-4005-8dcd-fbb4b207146c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.58686226Z" level=info msg="Starting container: 98cfb5009f95ad5a2f4b3ba9c528fccb342f01eaf2ae300260b99aa86f019ab6" id=b393b7b3-d91b-4083-a067-32c69f0872c0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:36:01 default-k8s-diff-port-370596 crio[837]: time="2025-10-19T17:36:01.590229191Z" level=info msg="Started container" PID=1795 containerID=98cfb5009f95ad5a2f4b3ba9c528fccb342f01eaf2ae300260b99aa86f019ab6 description=default/busybox/busybox id=b393b7b3-d91b-4083-a067-32c69f0872c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a555948b50d3d6c327f83166e760fe4886260c3fb3758aee894fd71ad5852b3c
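	Each msg="..." entry above is CRI-O handling one CRI gRPC call: the id= fields identify individual requests, and the RPC names in the log (/runtime.v1.RuntimeService/RunPodSandbox, PullImage, CreateContainer, StartContainer) are methods of the CRI RuntimeService and ImageService. As a sketch of talking to that same socket with the k8s.io/cri-api Go bindings (this assumes the default CRI-O socket path and is not how the report's container status table is produced):
	
		package main
	
		import (
			"context"
			"fmt"
			"time"
	
			"google.golang.org/grpc"
			"google.golang.org/grpc/credentials/insecure"
			runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
		)
	
		func main() {
			// CRI-O serves the CRI over a local unix socket; no TLS is involved.
			conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
				grpc.WithTransportCredentials(insecure.NewCredentials()))
			if err != nil {
				panic(err)
			}
			defer conn.Close()
	
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			defer cancel()
	
			// List all containers, analogous to one row per container below.
			rt := runtimeapi.NewRuntimeServiceClient(conn)
			resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
			if err != nil {
				panic(err)
			}
			for _, c := range resp.Containers {
				fmt.Printf("%s\t%s\t%s\n", c.Id, c.State, c.Metadata.Name)
			}
		}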
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	98cfb5009f95a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   a555948b50d3d       busybox                                                default
	d9bc4cb2b4bf3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   f2e26d33b8d81       coredns-66bc5c9577-vjhwx                               kube-system
	1ad15d3ccc440       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   b81b629eed3f0       storage-provisioner                                    kube-system
	b8879afd70d88       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   7826e769935b1       kindnet-6xvl9                                          kube-system
	ea41e15018665       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   44a4e3250fe90       kube-proxy-24xql                                       kube-system
	e6f8f0f4f1ef6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   35275f6e61c2a       etcd-default-k8s-diff-port-370596                      kube-system
	78016ab4ec72f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   07847c56af474       kube-controller-manager-default-k8s-diff-port-370596   kube-system
	1123199df37e6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   885f24c0972fb       kube-apiserver-default-k8s-diff-port-370596            kube-system
	680f8b1066eac       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   a15706a126284       kube-scheduler-default-k8s-diff-port-370596            kube-system
	
	
	==> coredns [d9bc4cb2b4bf334feab26a56933e07bd3203411a5015122b0302dfd63e6df267] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60025 - 22698 "HINFO IN 8061900082773761462.4678589293803805592. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030105835s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-370596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-370596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=default-k8s-diff-port-370596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_35_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:35:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-370596
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:36:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:35:55 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:35:55 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:35:55 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:35:55 +0000   Sun, 19 Oct 2025 17:35:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-370596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e51b66e9-2b10-4f4c-b9ea-b7f9cb5ec8fe
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-vjhwx                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-370596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-6xvl9                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-370596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-370596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-24xql                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-370596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
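	  (The request total is the per-pod column above summed: 250m for kube-apiserver + 200m for kube-controller-manager + 100m each for coredns, etcd, kindnet and kube-scheduler = 850m, i.e. 850m/2000m ≈ 42% of this 2-CPU node; likewise the 220Mi memory request is coredns 70Mi + etcd 100Mi + kindnet 50Mi.)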
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 54s   kube-proxy       
	  Normal   Starting                 61s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s   kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s   kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s   kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s   node-controller  Node default-k8s-diff-port-370596 event: Registered Node default-k8s-diff-port-370596 in Controller
	  Normal   NodeReady                14s   kubelet          Node default-k8s-diff-port-370596 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct19 17:12] overlayfs: idmapped layers are currently not supported
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e6f8f0f4f1ef687b34346d4feba41e14852cae35b82b4edecfc9a87f1828019a] <==
	{"level":"warn","ts":"2025-10-19T17:35:03.397234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.428502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.453233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.504357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.559708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.637249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.735290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.738912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.771721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.864082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.893776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:03.943686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.058850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.093366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.135723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.177942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.219810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.244314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.273720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.303057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.342038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.389383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.426745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.465588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:04.640228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33018","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:36:09 up  1:18,  0 user,  load average: 3.36, 3.82, 3.47
	Linux default-k8s-diff-port-370596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8879afd70d88584780744c670e27172576834298f18b245526773b61392e2a3] <==
	I1019 17:35:15.096652       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:35:15.096928       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:35:15.097082       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:35:15.097094       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:35:15.097108       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:35:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:35:15.304704       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:35:15.304794       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:35:15.304830       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:35:15.305634       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:35:45.306173       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:35:45.306414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:35:45.306595       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:35:45.306708       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 17:35:46.605708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:35:46.605745       1 metrics.go:72] Registering metrics
	I1019 17:35:46.605825       1 controller.go:711] "Syncing nftables rules"
	I1019 17:35:55.311480       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:35:55.311536       1 main.go:301] handling current node
	I1019 17:36:05.304676       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:36:05.304805       1 main.go:301] handling current node
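Note: the 30-second "i/o timeout" reflector errors above occur before kube-proxy has programmed the service VIP rules; the caches sync shortly after (17:35:46). A hedged reachability spot check from inside the node (any JSON reply, even a 401/403, proves the 10.96.0.1:443 path works; curl availability in the node image is an assumption):
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-370596 "curl -sk https://10.96.0.1:443/version"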
	
	
	==> kube-apiserver [1123199df37e6a04bc161d5824224f950b30283eb363a8ec06cd7f2f2bf28041] <==
	I1019 17:35:06.196545       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:35:06.196807       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:35:06.212363       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:35:06.212600       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:35:06.268141       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:35:06.268881       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:35:06.815618       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:35:06.824762       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:35:06.824781       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:35:07.521362       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:35:07.612570       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:35:07.765398       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:35:07.783025       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1019 17:35:07.784842       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:35:07.799087       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:35:08.066073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:35:08.673384       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:35:08.690233       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:35:08.700098       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:35:13.769051       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:35:13.769156       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 17:35:13.926287       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:35:14.120771       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:35:14.125512       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1019 17:36:08.269120       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:32774: use of closed network connection
	
	
	==> kube-controller-manager [78016ab4ec72f4d40028b8108d2bd32802b40fbdd204b74261fed9b6932807df] <==
	I1019 17:35:13.084146       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:35:13.091158       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:35:13.094763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:35:13.100901       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:35:13.110654       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:35:13.111810       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:35:13.113007       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:35:13.113054       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:35:13.113122       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:35:13.113171       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:35:13.113266       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:35:13.113034       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:35:13.113558       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:35:13.113584       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:35:13.113020       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:35:13.113606       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:35:13.116845       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:35:13.120134       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 17:35:13.120195       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 17:35:13.120240       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:35:13.120249       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:35:13.120259       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:35:13.129043       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-370596" podCIDRs=["10.244.0.0/24"]
	I1019 17:35:13.129234       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:35:58.072324       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ea41e150186659bfb68bbcb79d5342ef28a2395514e3f07d102b78b13bf2dbde] <==
	I1019 17:35:14.945168       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:35:15.048356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:35:15.155814       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:35:15.155859       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:35:15.155941       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:35:15.271612       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:35:15.271726       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:35:15.278698       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:35:15.279095       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:35:15.279332       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:35:15.280627       1 config.go:200] "Starting service config controller"
	I1019 17:35:15.280696       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:35:15.280741       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:35:15.280768       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:35:15.280817       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:35:15.280847       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:35:15.281574       1 config.go:309] "Starting node config controller"
	I1019 17:35:15.285168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:35:15.285242       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:35:15.381207       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:35:15.381244       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:35:15.381287       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
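Note: the "Kube-proxy configuration may be incomplete" warning above is advisory; with nodePortAddresses unset, NodePort traffic is accepted on all local IPs. In a kubeadm-provisioned cluster the setting lives in the kube-proxy ConfigMap; a sketch of applying the value the warning suggests (not something this test run does):
	# set nodePortAddresses: ["primary"] under the config section, then restart the daemonset
	kubectl --context default-k8s-diff-port-370596 -n kube-system edit configmap kube-proxy
	kubectl --context default-k8s-diff-port-370596 -n kube-system rollout restart daemonset kube-proxy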
	
	
	==> kube-scheduler [680f8b1066eac02d7082e3591f61dbae24f8d7074dd812d94d3c063a69cfe490] <==
	I1019 17:35:06.480214       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:35:06.480416       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:35:06.486040       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1019 17:35:06.495095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:35:06.495252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:35:06.495357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 17:35:06.495412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1019 17:35:06.480753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 17:35:06.509299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:35:06.509377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:35:06.509440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 17:35:06.513428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:35:06.516356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:35:06.516542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:35:06.516659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:35:06.516702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:35:06.516807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:35:06.518397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:35:06.518443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:35:06.529014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:35:06.529268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:35:06.529336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:35:06.529429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:35:07.512601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1019 17:35:09.687125       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
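Note: the burst of "Failed to watch ... is forbidden" errors is the scheduler's informers starting before the built-in RBAC bindings are reconciled; the section ends with its caches synced at 17:35:09. A hedged spot check that the permission exists once bootstrap settles:
	kubectl --context default-k8s-diff-port-370596 auth can-i list pods --as=system:kube-scheduler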
	
	
	==> kubelet <==
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:13.848905    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe5d7c3b-6719-434c-acc5-8a85ea0f703a-lib-modules\") pod \"kube-proxy-24xql\" (UID: \"fe5d7c3b-6719-434c-acc5-8a85ea0f703a\") " pod="kube-system/kube-proxy-24xql"
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:13.848924    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9x6\" (UniqueName: \"kubernetes.io/projected/fe5d7c3b-6719-434c-acc5-8a85ea0f703a-kube-api-access-jm9x6\") pod \"kube-proxy-24xql\" (UID: \"fe5d7c3b-6719-434c-acc5-8a85ea0f703a\") " pod="kube-system/kube-proxy-24xql"
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:13.848946    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe5d7c3b-6719-434c-acc5-8a85ea0f703a-kube-proxy\") pod \"kube-proxy-24xql\" (UID: \"fe5d7c3b-6719-434c-acc5-8a85ea0f703a\") " pod="kube-system/kube-proxy-24xql"
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:13.849020    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dfab6e1-f690-4a7c-8b62-87160d9a8971-xtables-lock\") pod \"kindnet-6xvl9\" (UID: \"5dfab6e1-f690-4a7c-8b62-87160d9a8971\") " pod="kube-system/kindnet-6xvl9"
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:13.849083    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqszr\" (UniqueName: \"kubernetes.io/projected/5dfab6e1-f690-4a7c-8b62-87160d9a8971-kube-api-access-kqszr\") pod \"kindnet-6xvl9\" (UID: \"5dfab6e1-f690-4a7c-8b62-87160d9a8971\") " pod="kube-system/kindnet-6xvl9"
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: E1019 17:35:13.971824    1305 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: E1019 17:35:13.972016    1305 projected.go:196] Error preparing data for projected volume kube-api-access-jm9x6 for pod kube-system/kube-proxy-24xql: configmap "kube-root-ca.crt" not found
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: E1019 17:35:13.972194    1305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fe5d7c3b-6719-434c-acc5-8a85ea0f703a-kube-api-access-jm9x6 podName:fe5d7c3b-6719-434c-acc5-8a85ea0f703a nodeName:}" failed. No retries permitted until 2025-10-19 17:35:14.472152697 +0000 UTC m=+5.978679377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jm9x6" (UniqueName: "kubernetes.io/projected/fe5d7c3b-6719-434c-acc5-8a85ea0f703a-kube-api-access-jm9x6") pod "kube-proxy-24xql" (UID: "fe5d7c3b-6719-434c-acc5-8a85ea0f703a") : configmap "kube-root-ca.crt" not found
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: E1019 17:35:13.974172    1305 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: E1019 17:35:13.974336    1305 projected.go:196] Error preparing data for projected volume kube-api-access-kqszr for pod kube-system/kindnet-6xvl9: configmap "kube-root-ca.crt" not found
	Oct 19 17:35:13 default-k8s-diff-port-370596 kubelet[1305]: E1019 17:35:13.974478    1305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfab6e1-f690-4a7c-8b62-87160d9a8971-kube-api-access-kqszr podName:5dfab6e1-f690-4a7c-8b62-87160d9a8971 nodeName:}" failed. No retries permitted until 2025-10-19 17:35:14.474456172 +0000 UTC m=+5.980982860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kqszr" (UniqueName: "kubernetes.io/projected/5dfab6e1-f690-4a7c-8b62-87160d9a8971-kube-api-access-kqszr") pod "kindnet-6xvl9" (UID: "5dfab6e1-f690-4a7c-8b62-87160d9a8971") : configmap "kube-root-ca.crt" not found
	Oct 19 17:35:14 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:14.556888    1305 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 17:35:14 default-k8s-diff-port-370596 kubelet[1305]: W1019 17:35:14.725610    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/crio-44a4e3250fe908c9acdbac36410579d267c553e62f415b6765b2b7345f255ae7 WatchSource:0}: Error finding container 44a4e3250fe908c9acdbac36410579d267c553e62f415b6765b2b7345f255ae7: Status 404 returned error can't find the container with id 44a4e3250fe908c9acdbac36410579d267c553e62f415b6765b2b7345f255ae7
	Oct 19 17:35:14 default-k8s-diff-port-370596 kubelet[1305]: W1019 17:35:14.754577    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/crio-7826e769935b1ba6fb81c09d59575d799165fc98c538710a0a6fb0c25ec4280d WatchSource:0}: Error finding container 7826e769935b1ba6fb81c09d59575d799165fc98c538710a0a6fb0c25ec4280d: Status 404 returned error can't find the container with id 7826e769935b1ba6fb81c09d59575d799165fc98c538710a0a6fb0c25ec4280d
	Oct 19 17:35:15 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:15.778156    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-24xql" podStartSLOduration=2.778136142 podStartE2EDuration="2.778136142s" podCreationTimestamp="2025-10-19 17:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:35:15.760534427 +0000 UTC m=+7.267061123" watchObservedRunningTime="2025-10-19 17:35:15.778136142 +0000 UTC m=+7.284662830"
	Oct 19 17:35:15 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:15.778287    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6xvl9" podStartSLOduration=2.7782811450000002 podStartE2EDuration="2.778281145s" podCreationTimestamp="2025-10-19 17:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:35:15.777965268 +0000 UTC m=+7.284491972" watchObservedRunningTime="2025-10-19 17:35:15.778281145 +0000 UTC m=+7.284807833"
	Oct 19 17:35:55 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:55.527639    1305 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 17:35:55 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:55.663880    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/157cf698-27a7-446b-9122-e046c021a004-tmp\") pod \"storage-provisioner\" (UID: \"157cf698-27a7-446b-9122-e046c021a004\") " pod="kube-system/storage-provisioner"
	Oct 19 17:35:55 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:55.663950    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwq7\" (UniqueName: \"kubernetes.io/projected/157cf698-27a7-446b-9122-e046c021a004-kube-api-access-qhwq7\") pod \"storage-provisioner\" (UID: \"157cf698-27a7-446b-9122-e046c021a004\") " pod="kube-system/storage-provisioner"
	Oct 19 17:35:55 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:55.663994    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28906e96-8f1a-4fa8-94fd-78e3c3892116-config-volume\") pod \"coredns-66bc5c9577-vjhwx\" (UID: \"28906e96-8f1a-4fa8-94fd-78e3c3892116\") " pod="kube-system/coredns-66bc5c9577-vjhwx"
	Oct 19 17:35:55 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:55.664018    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlv8s\" (UniqueName: \"kubernetes.io/projected/28906e96-8f1a-4fa8-94fd-78e3c3892116-kube-api-access-qlv8s\") pod \"coredns-66bc5c9577-vjhwx\" (UID: \"28906e96-8f1a-4fa8-94fd-78e3c3892116\") " pod="kube-system/coredns-66bc5c9577-vjhwx"
	Oct 19 17:35:56 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:56.933806    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.933785796 podStartE2EDuration="41.933785796s" podCreationTimestamp="2025-10-19 17:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:35:56.920658194 +0000 UTC m=+48.427184882" watchObservedRunningTime="2025-10-19 17:35:56.933785796 +0000 UTC m=+48.440312476"
	Oct 19 17:35:59 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:59.124496    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vjhwx" podStartSLOduration=45.124477968 podStartE2EDuration="45.124477968s" podCreationTimestamp="2025-10-19 17:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:35:56.93419902 +0000 UTC m=+48.440725708" watchObservedRunningTime="2025-10-19 17:35:59.124477968 +0000 UTC m=+50.631004656"
	Oct 19 17:35:59 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:35:59.185755    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fr89\" (UniqueName: \"kubernetes.io/projected/fde11acc-3723-4708-bdc8-173c2bf1233d-kube-api-access-5fr89\") pod \"busybox\" (UID: \"fde11acc-3723-4708-bdc8-173c2bf1233d\") " pod="default/busybox"
	Oct 19 17:36:01 default-k8s-diff-port-370596 kubelet[1305]: I1019 17:36:01.930867    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.840936027 podStartE2EDuration="2.930847546s" podCreationTimestamp="2025-10-19 17:35:59 +0000 UTC" firstStartedPulling="2025-10-19 17:35:59.462525356 +0000 UTC m=+50.969052044" lastFinishedPulling="2025-10-19 17:36:01.552436883 +0000 UTC m=+53.058963563" observedRunningTime="2025-10-19 17:36:01.930674949 +0000 UTC m=+53.437201629" watchObservedRunningTime="2025-10-19 17:36:01.930847546 +0000 UTC m=+53.437374226"
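Note: the MountVolume.SetUp retries above are waiting on the per-namespace kube-root-ca.crt ConfigMap, which kube-controller-manager's root-ca-cert-publisher creates shortly after startup; the 500ms retries resolve once it appears. A spot check (not part of the suite):
	kubectl --context default-k8s-diff-port-370596 -n kube-system get configmap kube-root-ca.crt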
	
	
	==> storage-provisioner [1ad15d3ccc440b17d9f517c67d79efc2e84aba3f66b43451746a5d8f08d37e46] <==
	I1019 17:35:55.981610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:35:56.022454       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:35:56.022607       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:35:56.027402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:35:56.035504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:35:56.035764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:35:56.035976       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-370596_ec050413-5fe9-4051-a54e-7719f8f32f99!
	I1019 17:35:56.037161       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1c4cfdf-cdef-4239-ba06-3720ec0343a4", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-370596_ec050413-5fe9-4051-a54e-7719f8f32f99 became leader
	W1019 17:35:56.042733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:35:56.051383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:35:56.136873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-370596_ec050413-5fe9-4051-a54e-7719f8f32f99!
	W1019 17:35:58.054708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:35:58.062031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:00.103472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:00.151586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:02.154723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:02.161826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:04.165103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:04.169590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:06.173270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:06.178143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:08.182512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:08.188155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:10.193524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:10.199191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
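Note: the repeated deprecation warnings come from the provisioner's leader-election lock, which is still a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event above); EndpointSlice is the replacement the server points at for service endpoints generally. The lock can be inspected directly; a sketch:
	kubectl --context default-k8s-diff-port-370596 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml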
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-370596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-296314 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-296314 --alsologtostderr -v=1: exit status 80 (1.999267516s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-296314 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:36:29.204435  246325 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:36:29.204663  246325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:29.204692  246325 out.go:374] Setting ErrFile to fd 2...
	I1019 17:36:29.204710  246325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:29.205044  246325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:36:29.205333  246325 out.go:368] Setting JSON to false
	I1019 17:36:29.205379  246325 mustload.go:66] Loading cluster: embed-certs-296314
	I1019 17:36:29.205901  246325 config.go:182] Loaded profile config "embed-certs-296314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:29.206470  246325 cli_runner.go:164] Run: docker container inspect embed-certs-296314 --format={{.State.Status}}
	I1019 17:36:29.236244  246325 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:36:29.236559  246325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:36:29.319124  246325 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 17:36:29.309607819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:36:29.319999  246325 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-296314 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:36:29.323525  246325 out.go:179] * Pausing node embed-certs-296314 ... 
	I1019 17:36:29.326336  246325 host.go:66] Checking if "embed-certs-296314" exists ...
	I1019 17:36:29.326815  246325 ssh_runner.go:195] Run: systemctl --version
	I1019 17:36:29.326865  246325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-296314
	I1019 17:36:29.350291  246325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/embed-certs-296314/id_rsa Username:docker}
	I1019 17:36:29.456931  246325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:36:29.492401  246325 pause.go:52] kubelet running: true
	I1019 17:36:29.492464  246325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:36:29.805953  246325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:36:29.806031  246325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:36:29.913741  246325 cri.go:89] found id: "b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07"
	I1019 17:36:29.913758  246325 cri.go:89] found id: "d5351136756eab8472ddeeb973620be9c36bf0fe3334b6702fa621c82598d70b"
	I1019 17:36:29.913763  246325 cri.go:89] found id: "2b961a279052eaef38f32facfd740a3beaeef53423104d9f42c20da1ee788acd"
	I1019 17:36:29.913767  246325 cri.go:89] found id: "2ce143425275aa97757397246ab5e496dea31d6212964223b640c12d73d5bd87"
	I1019 17:36:29.913770  246325 cri.go:89] found id: "93e57ed7f8473a0c891f8066794b585dc8e89167476e00470494528ae25c959e"
	I1019 17:36:29.913774  246325 cri.go:89] found id: "419c95753ba617267c87fde14322f90237df72a7488e84bda081428a2e533e7b"
	I1019 17:36:29.913788  246325 cri.go:89] found id: "f1ebcf0400230671abb8861c8f1296b2ddc8747887ce982a7032673710caf431"
	I1019 17:36:29.913791  246325 cri.go:89] found id: "601d05c29e65eea670a097054cee3344d68d6b3c679c2b5a8588e8ba24deefab"
	I1019 17:36:29.913794  246325 cri.go:89] found id: "1b872d3de58c84db020f0ee9ad021aaf524cc7e1a2f5753ee9ccc615f3d60b9e"
	I1019 17:36:29.913800  246325 cri.go:89] found id: "a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	I1019 17:36:29.913803  246325 cri.go:89] found id: "98f40f985abe20795cb17701cf451856590428d95c58119f4bb35737e7c3454c"
	I1019 17:36:29.913806  246325 cri.go:89] found id: ""
	I1019 17:36:29.913851  246325 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:36:29.926003  246325 retry.go:31] will retry after 147.737729ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:36:29Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:36:30.074672  246325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:36:30.095084  246325 pause.go:52] kubelet running: false
	I1019 17:36:30.095184  246325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:36:30.324416  246325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:36:30.324507  246325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:36:30.417682  246325 cri.go:89] found id: "b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07"
	I1019 17:36:30.417703  246325 cri.go:89] found id: "d5351136756eab8472ddeeb973620be9c36bf0fe3334b6702fa621c82598d70b"
	I1019 17:36:30.417708  246325 cri.go:89] found id: "2b961a279052eaef38f32facfd740a3beaeef53423104d9f42c20da1ee788acd"
	I1019 17:36:30.417712  246325 cri.go:89] found id: "2ce143425275aa97757397246ab5e496dea31d6212964223b640c12d73d5bd87"
	I1019 17:36:30.417715  246325 cri.go:89] found id: "93e57ed7f8473a0c891f8066794b585dc8e89167476e00470494528ae25c959e"
	I1019 17:36:30.417719  246325 cri.go:89] found id: "419c95753ba617267c87fde14322f90237df72a7488e84bda081428a2e533e7b"
	I1019 17:36:30.417723  246325 cri.go:89] found id: "f1ebcf0400230671abb8861c8f1296b2ddc8747887ce982a7032673710caf431"
	I1019 17:36:30.417726  246325 cri.go:89] found id: "601d05c29e65eea670a097054cee3344d68d6b3c679c2b5a8588e8ba24deefab"
	I1019 17:36:30.417729  246325 cri.go:89] found id: "1b872d3de58c84db020f0ee9ad021aaf524cc7e1a2f5753ee9ccc615f3d60b9e"
	I1019 17:36:30.417735  246325 cri.go:89] found id: "a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	I1019 17:36:30.417739  246325 cri.go:89] found id: "98f40f985abe20795cb17701cf451856590428d95c58119f4bb35737e7c3454c"
	I1019 17:36:30.417742  246325 cri.go:89] found id: ""
	I1019 17:36:30.417789  246325 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:36:30.433045  246325 retry.go:31] will retry after 263.9534ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:36:30Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:36:30.697527  246325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:36:30.711873  246325 pause.go:52] kubelet running: false
	I1019 17:36:30.711939  246325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:36:30.975111  246325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:36:30.975181  246325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:36:31.105075  246325 cri.go:89] found id: "b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07"
	I1019 17:36:31.105102  246325 cri.go:89] found id: "d5351136756eab8472ddeeb973620be9c36bf0fe3334b6702fa621c82598d70b"
	I1019 17:36:31.105108  246325 cri.go:89] found id: "2b961a279052eaef38f32facfd740a3beaeef53423104d9f42c20da1ee788acd"
	I1019 17:36:31.105112  246325 cri.go:89] found id: "2ce143425275aa97757397246ab5e496dea31d6212964223b640c12d73d5bd87"
	I1019 17:36:31.105116  246325 cri.go:89] found id: "93e57ed7f8473a0c891f8066794b585dc8e89167476e00470494528ae25c959e"
	I1019 17:36:31.105120  246325 cri.go:89] found id: "419c95753ba617267c87fde14322f90237df72a7488e84bda081428a2e533e7b"
	I1019 17:36:31.105123  246325 cri.go:89] found id: "f1ebcf0400230671abb8861c8f1296b2ddc8747887ce982a7032673710caf431"
	I1019 17:36:31.105127  246325 cri.go:89] found id: "601d05c29e65eea670a097054cee3344d68d6b3c679c2b5a8588e8ba24deefab"
	I1019 17:36:31.105131  246325 cri.go:89] found id: "1b872d3de58c84db020f0ee9ad021aaf524cc7e1a2f5753ee9ccc615f3d60b9e"
	I1019 17:36:31.105137  246325 cri.go:89] found id: "a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	I1019 17:36:31.105141  246325 cri.go:89] found id: "98f40f985abe20795cb17701cf451856590428d95c58119f4bb35737e7c3454c"
	I1019 17:36:31.105145  246325 cri.go:89] found id: ""
	I1019 17:36:31.105206  246325 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:36:31.122027  246325 out.go:203] 
	W1019 17:36:31.124978  246325 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:36:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:36:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:36:31.125004  246325 out.go:285] * 
	* 
	W1019 17:36:31.129966  246325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:36:31.133089  246325 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-296314 --alsologtostderr -v=1 failed: exit status 80
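The failure mode is consistent across all three retries: crictl enumerates running containers (the "found id" lines), but `sudo runc list -f json` cannot open /run/runc, the state directory runc defaults to when run as root. A triage sketch inside the node; the /etc/crio path is the stock cri-o config location and is an assumption about this image:
	out/minikube-linux-arm64 ssh -p embed-certs-296314 "sudo ls -ld /run/runc"            # does the state dir runc expects exist?
	out/minikube-linux-arm64 ssh -p embed-certs-296314 "sudo crictl ps -a"                # the CRI view, which did work above
	out/minikube-linux-arm64 ssh -p embed-certs-296314 "sudo grep -r runtime /etc/crio/"  # which OCI runtime cri-o is configured with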
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-296314
helpers_test.go:243: (dbg) docker inspect embed-certs-296314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d",
	        "Created": "2025-10-19T17:33:35.165314955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 242458,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:35:18.82347683Z",
	            "FinishedAt": "2025-10-19T17:35:17.947438069Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/hosts",
	        "LogPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d-json.log",
	        "Name": "/embed-certs-296314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-296314:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-296314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d",
	                "LowerDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-296314",
	                "Source": "/var/lib/docker/volumes/embed-certs-296314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-296314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-296314",
	                "name.minikube.sigs.k8s.io": "embed-certs-296314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bd708428eb1fb90ebe3090945eb274eacb194a4ba95e86919142265c2928f213",
	            "SandboxKey": "/var/run/docker/netns/bd708428eb1f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-296314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d7:68:58:69:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b85768c3935a46e7e3c1643ba28d42a950563959f3252b2b534926365c369610",
	                    "EndpointID": "f2cce0ca8c5d56e9157e77255a0811d2a144668cd996a91e1f99ac70d6e43204",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-296314",
	                        "5854ebe0a2d7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
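The full docker inspect dump is worth archiving, but individual fields can be pulled with a Go template via -f, the same mechanism minikube's cli_runner uses later in these logs. A sketch using field paths visible in the JSON above:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' embed-certs-296314
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-296314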
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314: exit status 2 (513.033917ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
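"Running" from the {{.Host}} template only says the node container is up, which is why the framework treats exit status 2 as possibly ok. Combining status fields in one template disambiguates a paused or stopped control plane; the field names follow minikube's documented status struct, and the exact output for this profile is assumed, not captured here:

	out/minikube-linux-arm64 status -p embed-certs-296314 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'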
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-296314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-296314 logs -n 25: (2.108290199s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                              │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                          │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ image   │ no-preload-038781 image list --format=json                                                                                                                               │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                              │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ delete  │ -p no-preload-038781                                                                                                                                                     │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p no-preload-038781                                                                                                                                                     │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                          │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                             │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-370596 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ image   │ embed-certs-296314 image list --format=json                                                                                                                              │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ pause   │ -p embed-certs-296314 --alsologtostderr -v=1                                                                                                                             │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:36:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:36:23.056552  245420 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:36:23.056740  245420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:23.056773  245420 out.go:374] Setting ErrFile to fd 2...
	I1019 17:36:23.056793  245420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:23.057557  245420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:36:23.058029  245420 out.go:368] Setting JSON to false
	I1019 17:36:23.059087  245420 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4731,"bootTime":1760890652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:36:23.059167  245420 start.go:143] virtualization:  
	I1019 17:36:23.062741  245420 out.go:179] * [default-k8s-diff-port-370596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:36:23.066560  245420 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:36:23.066684  245420 notify.go:221] Checking for updates...
	I1019 17:36:23.072593  245420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:36:23.075570  245420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:36:23.078323  245420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:36:23.081292  245420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:36:23.084155  245420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:36:23.087461  245420 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:23.088147  245420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:36:23.114962  245420 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:36:23.115082  245420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:36:23.189771  245420 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:36:23.180539878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:36:23.189878  245420 docker.go:319] overlay module found
	I1019 17:36:23.192966  245420 out.go:179] * Using the docker driver based on existing profile
	I1019 17:36:23.195796  245420 start.go:309] selected driver: docker
	I1019 17:36:23.195816  245420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:36:23.195919  245420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:36:23.196632  245420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:36:23.257313  245420 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:36:23.248194418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:36:23.257656  245420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:36:23.257696  245420 cni.go:84] Creating CNI manager for ""
	I1019 17:36:23.257752  245420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:36:23.257788  245420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:36:23.262798  245420 out.go:179] * Starting "default-k8s-diff-port-370596" primary control-plane node in "default-k8s-diff-port-370596" cluster
	I1019 17:36:23.265725  245420 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:36:23.268675  245420 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:36:23.271401  245420 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:36:23.271469  245420 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:36:23.271484  245420 cache.go:59] Caching tarball of preloaded images
	I1019 17:36:23.271489  245420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:36:23.271599  245420 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:36:23.271611  245420 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:36:23.271742  245420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/config.json ...
	I1019 17:36:23.292279  245420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:36:23.292303  245420 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:36:23.292317  245420 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:36:23.292340  245420 start.go:360] acquireMachinesLock for default-k8s-diff-port-370596: {Name:mk4e5a46aec1705453bccb79fee591d547fbb19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:36:23.292404  245420 start.go:364] duration metric: took 33.166µs to acquireMachinesLock for "default-k8s-diff-port-370596"
	I1019 17:36:23.292426  245420 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:36:23.292433  245420 fix.go:54] fixHost starting: 
	I1019 17:36:23.292680  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:23.308913  245420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-370596: state=Stopped err=<nil>
	W1019 17:36:23.308955  245420 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:36:23.312189  245420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-370596" ...
	I1019 17:36:23.312268  245420 cli_runner.go:164] Run: docker start default-k8s-diff-port-370596
	I1019 17:36:23.548713  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:23.578694  245420 kic.go:430] container "default-k8s-diff-port-370596" state is running.
	I1019 17:36:23.579078  245420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:36:23.601908  245420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/config.json ...
	I1019 17:36:23.602140  245420 machine.go:94] provisionDockerMachine start ...
	I1019 17:36:23.602206  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:23.628214  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:23.628539  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:23.628554  245420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:36:23.629167  245420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 17:36:26.778112  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-370596
	
	I1019 17:36:26.778192  245420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-370596"
	I1019 17:36:26.778282  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:26.796187  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:26.796565  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:26.796582  245420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-370596 && echo "default-k8s-diff-port-370596" | sudo tee /etc/hostname
	I1019 17:36:26.955800  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-370596
	
	I1019 17:36:26.955877  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:26.974068  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:26.974376  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:26.974397  245420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-370596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-370596/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-370596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:36:27.126724  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:36:27.126751  245420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:36:27.126770  245420 ubuntu.go:190] setting up certificates
	I1019 17:36:27.126779  245420 provision.go:84] configureAuth start
	I1019 17:36:27.126839  245420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:36:27.145022  245420 provision.go:143] copyHostCerts
	I1019 17:36:27.145091  245420 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:36:27.145113  245420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:36:27.145190  245420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:36:27.145331  245420 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:36:27.145343  245420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:36:27.145371  245420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:36:27.145427  245420 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:36:27.145434  245420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:36:27.145458  245420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:36:27.145511  245420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-370596 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-370596 localhost minikube]
	I1019 17:36:27.844619  245420 provision.go:177] copyRemoteCerts
	I1019 17:36:27.844685  245420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:36:27.844732  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:27.863019  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:27.966558  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:36:27.985775  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 17:36:28.006801  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:36:28.026951  245420 provision.go:87] duration metric: took 900.158192ms to configureAuth
	I1019 17:36:28.026983  245420 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:36:28.027219  245420 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:28.027357  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.045805  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:28.046130  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:28.046154  245420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:36:28.358453  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:36:28.358572  245420 machine.go:97] duration metric: took 4.756416202s to provisionDockerMachine
	I1019 17:36:28.358604  245420 start.go:293] postStartSetup for "default-k8s-diff-port-370596" (driver="docker")
	I1019 17:36:28.358629  245420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:36:28.358729  245420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:36:28.358810  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.380768  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.494686  245420 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:36:28.498206  245420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:36:28.498281  245420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:36:28.498301  245420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:36:28.498368  245420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:36:28.498446  245420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:36:28.498603  245420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:36:28.506199  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:36:28.524813  245420 start.go:296] duration metric: took 166.179375ms for postStartSetup
	I1019 17:36:28.524919  245420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:36:28.524974  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.542002  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.644044  245420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:36:28.649145  245420 fix.go:56] duration metric: took 5.356705609s for fixHost
	I1019 17:36:28.649171  245420 start.go:83] releasing machines lock for "default-k8s-diff-port-370596", held for 5.356755955s
	I1019 17:36:28.649268  245420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:36:28.666520  245420 ssh_runner.go:195] Run: cat /version.json
	I1019 17:36:28.666721  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.666638  245420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:36:28.666834  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.685731  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.686937  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.790477  245420 ssh_runner.go:195] Run: systemctl --version
	I1019 17:36:28.891140  245420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:36:28.943386  245420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:36:28.949395  245420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:36:28.949477  245420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:36:28.964133  245420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:36:28.964158  245420 start.go:496] detecting cgroup driver to use...
	I1019 17:36:28.964195  245420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:36:28.964242  245420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:36:28.985553  245420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:36:29.004136  245420 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:36:29.004206  245420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:36:29.022374  245420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:36:29.037217  245420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:36:29.186372  245420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:36:29.351871  245420 docker.go:234] disabling docker service ...
	I1019 17:36:29.351932  245420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:36:29.372497  245420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:36:29.391821  245420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:36:29.534143  245420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:36:29.720579  245420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:36:29.738685  245420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:36:29.754628  245420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:36:29.754703  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.764798  245420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:36:29.764910  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.774210  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.783621  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.793726  245420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:36:29.802122  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.817657  245420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.829977  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.839710  245420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:36:29.850341  245420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:36:29.857869  245420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:36:29.997259  245420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:36:30.236206  245420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:36:30.236351  245420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:36:30.241014  245420 start.go:564] Will wait 60s for crictl version
	I1019 17:36:30.241176  245420 ssh_runner.go:195] Run: which crictl
	I1019 17:36:30.246312  245420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:36:30.282620  245420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:36:30.282754  245420 ssh_runner.go:195] Run: crio --version
	I1019 17:36:30.319012  245420 ssh_runner.go:195] Run: crio --version
	I1019 17:36:30.364348  245420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.513189927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9b9623b8-2269-42ee-8264-478776f3915e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.514660536Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3ea585e5-4134-4a29-bdc9-e6068c43eef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.514923449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.526397907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.526859362Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1bfe1e42b365b033f65ebcbee5f2676b4c4c61dc4a96f433b4a094bfa5328753/merged/etc/passwd: no such file or directory"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.526905861Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1bfe1e42b365b033f65ebcbee5f2676b4c4c61dc4a96f433b4a094bfa5328753/merged/etc/group: no such file or directory"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.527395812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.548080976Z" level=info msg="Created container b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07: kube-system/storage-provisioner/storage-provisioner" id=3ea585e5-4134-4a29-bdc9-e6068c43eef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.549071882Z" level=info msg="Starting container: b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07" id=38c31fd8-5c14-4e8b-864c-aebb056c4bdd name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.551005948Z" level=info msg="Started container" PID=1641 containerID=b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07 description=kube-system/storage-provisioner/storage-provisioner id=38c31fd8-5c14-4e8b-864c-aebb056c4bdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a3de4d93e94448df972025a2e807ab8264a28a8cead47f4a57435893fe2c2d0
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.202665965Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.206274278Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.206308617Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.206330878Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.209420332Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.209462778Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.209482068Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.212854628Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.212888253Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.21291524Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.216673734Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.216708335Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.216729808Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.219580063Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.21961103Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b8323b93b0c18       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   8a3de4d93e944       storage-provisioner                          kube-system
	a03a9a22e4c9c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           36 seconds ago       Exited              dashboard-metrics-scraper   2                   c7b3e97cc73fa       dashboard-metrics-scraper-6ffb444bf9-sz9f5   kubernetes-dashboard
	98f40f985abe2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   dd758cfa5e40b       kubernetes-dashboard-855c9754f9-qqbvj        kubernetes-dashboard
	d5351136756ea       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   5a338b7f5cc39       coredns-66bc5c9577-2xbw2                     kube-system
	7ae94595875e7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   2656135bd177b       busybox                                      default
	2b961a279052e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   bf45195b73c05       kube-proxy-5sj42                             kube-system
	2ce143425275a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   d847e6abcaeeb       kindnet-7nwqx                                kube-system
	93e57ed7f8473       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   8a3de4d93e944       storage-provisioner                          kube-system
	419c95753ba61       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   58d0af3067288       kube-controller-manager-embed-certs-296314   kube-system
	f1ebcf0400230       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0ece62fda443c       kube-scheduler-embed-certs-296314            kube-system
	601d05c29e65e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c692e82082920       kube-apiserver-embed-certs-296314            kube-system
	1b872d3de58c8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c13564e4e2791       etcd-embed-certs-296314                      kube-system
	
	
	==> coredns [d5351136756eab8472ddeeb973620be9c36bf0fe3334b6702fa621c82598d70b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50762 - 495 "HINFO IN 1850847966072531663.994089535842801053. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.003980045s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-296314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-296314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=embed-certs-296314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_34_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:34:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-296314
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:36:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:34:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-296314
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                d8253982-2ff8-43b9-b6f4-cc698577d51f
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-2xbw2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-embed-certs-296314                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-7nwqx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-embed-certs-296314             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-embed-certs-296314    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-5sj42                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-embed-certs-296314             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sz9f5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qqbvj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m37s)  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m37s)  kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m37s)  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m24s                  node-controller  Node embed-certs-296314 event: Registered Node embed-certs-296314 in Controller
	  Normal   NodeReady                102s                   kubelet          Node embed-certs-296314 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node embed-certs-296314 event: Registered Node embed-certs-296314 in Controller
	
	
	==> dmesg <==
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	[Oct19 17:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1b872d3de58c84db020f0ee9ad021aaf524cc7e1a2f5753ee9ccc615f3d60b9e] <==
	{"level":"warn","ts":"2025-10-19T17:35:30.040775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.111931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.149920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.168796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.200503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.223319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.239588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.261221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.273801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.292608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.309632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.339113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.359098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.373937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.398132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.417281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.448154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.467527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.489906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.506590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.538475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.570509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.610133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.657343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.770509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59802","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:36:33 up  1:19,  0 user,  load average: 2.93, 3.68, 3.43
	Linux embed-certs-296314 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ce143425275aa97757397246ab5e496dea31d6212964223b640c12d73d5bd87] <==
	I1019 17:35:32.956790       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:35:32.996448       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:35:33.002817       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:35:33.002849       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:35:33.002869       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:35:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:35:33.209580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:35:33.209793       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:35:33.209840       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:35:33.211189       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:36:03.197584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:36:03.211159       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 17:36:03.211356       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:36:03.211176       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 17:36:04.510088       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:36:04.510189       1 metrics.go:72] Registering metrics
	I1019 17:36:04.510277       1 controller.go:711] "Syncing nftables rules"
	I1019 17:36:13.202325       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:36:13.202364       1 main.go:301] handling current node
	I1019 17:36:23.203787       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:36:23.203892       1 main.go:301] handling current node
	I1019 17:36:33.203706       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:36:33.203733       1 main.go:301] handling current node
	
	
	==> kube-apiserver [601d05c29e65eea670a097054cee3344d68d6b3c679c2b5a8588e8ba24deefab] <==
	I1019 17:35:31.787310       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:35:31.798384       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:35:31.832203       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:35:31.848515       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:35:31.848659       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:35:31.848735       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:35:31.876232       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:35:31.876269       1 policy_source.go:240] refreshing policies
	I1019 17:35:31.881746       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:35:31.881863       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:35:31.881877       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:35:31.910888       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:35:31.925467       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:35:31.958176       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:35:32.250348       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:35:32.498000       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:35:33.008813       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:35:33.077023       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:35:33.115724       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:35:33.128208       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:35:33.198120       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.194.188"}
	I1019 17:35:33.217771       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.58.147"}
	I1019 17:35:35.477493       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:35:35.729718       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:35:35.776858       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [419c95753ba617267c87fde14322f90237df72a7488e84bda081428a2e533e7b] <==
	I1019 17:35:35.230672       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:35:35.230678       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:35:35.230685       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:35:35.230762       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:35:35.232288       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:35:35.238040       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:35:35.239298       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:35:35.240416       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:35:35.242572       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:35:35.244890       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:35:35.269011       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:35:35.270383       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:35:35.270458       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:35:35.270515       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:35:35.270691       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:35:35.270751       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:35:35.271041       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:35:35.272236       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:35:35.272295       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:35:35.283037       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:35:35.292340       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:35:35.292361       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:35:35.292369       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:35:35.734888       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1019 17:35:35.738522       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [2b961a279052eaef38f32facfd740a3beaeef53423104d9f42c20da1ee788acd] <==
	I1019 17:35:33.233689       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:35:33.341245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:35:33.451314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:35:33.451433       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:35:33.451604       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:35:33.471179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:35:33.471230       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:35:33.475213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:35:33.475723       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:35:33.475977       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:35:33.478171       1 config.go:200] "Starting service config controller"
	I1019 17:35:33.478372       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:35:33.478421       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:35:33.478428       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:35:33.478455       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:35:33.478460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:35:33.484518       1 config.go:309] "Starting node config controller"
	I1019 17:35:33.484591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:35:33.484621       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:35:33.579457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:35:33.579469       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:35:33.579523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f1ebcf0400230671abb8861c8f1296b2ddc8747887ce982a7032673710caf431] <==
	I1019 17:35:30.667000       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:35:32.172256       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:35:32.172353       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:35:32.177725       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:35:32.177937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:35:32.179684       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:35:32.177913       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:35:32.179782       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:35:32.177952       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:35:32.185545       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:35:32.177965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:35:32.280368       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:35:32.280553       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:35:32.286243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:35:35 embed-certs-296314 kubelet[778]: I1019 17:35:35.760122     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f55b8585-f906-45b9-9eee-4978b9ccde17-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-qqbvj\" (UID: \"f55b8585-f906-45b9-9eee-4978b9ccde17\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qqbvj"
	Oct 19 17:35:35 embed-certs-296314 kubelet[778]: I1019 17:35:35.760259     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbvk\" (UniqueName: \"kubernetes.io/projected/e6bdf3e7-11f3-4453-b8be-ef8d46c59338-kube-api-access-txbvk\") pod \"dashboard-metrics-scraper-6ffb444bf9-sz9f5\" (UID: \"e6bdf3e7-11f3-4453-b8be-ef8d46c59338\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5"
	Oct 19 17:35:35 embed-certs-296314 kubelet[778]: W1019 17:35:35.997756     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/crio-c7b3e97cc73fa12e2ef0ffa82a7514899e08c518f0a8f67de87209a7d633ba77 WatchSource:0}: Error finding container c7b3e97cc73fa12e2ef0ffa82a7514899e08c518f0a8f67de87209a7d633ba77: Status 404 returned error can't find the container with id c7b3e97cc73fa12e2ef0ffa82a7514899e08c518f0a8f67de87209a7d633ba77
	Oct 19 17:35:36 embed-certs-296314 kubelet[778]: W1019 17:35:36.020185     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/crio-dd758cfa5e40b3163d715135fcdccf6f4c09f289a6074cefb713ee9fd8be94e0 WatchSource:0}: Error finding container dd758cfa5e40b3163d715135fcdccf6f4c09f289a6074cefb713ee9fd8be94e0: Status 404 returned error can't find the container with id dd758cfa5e40b3163d715135fcdccf6f4c09f289a6074cefb713ee9fd8be94e0
	Oct 19 17:35:40 embed-certs-296314 kubelet[778]: I1019 17:35:40.425907     778 scope.go:117] "RemoveContainer" containerID="c1925e9b495c8e9bb365355d1e636ed5fcd30dc5d5a69081848da9c826d08241"
	Oct 19 17:35:41 embed-certs-296314 kubelet[778]: I1019 17:35:41.433697     778 scope.go:117] "RemoveContainer" containerID="c1925e9b495c8e9bb365355d1e636ed5fcd30dc5d5a69081848da9c826d08241"
	Oct 19 17:35:41 embed-certs-296314 kubelet[778]: I1019 17:35:41.434047     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:41 embed-certs-296314 kubelet[778]: E1019 17:35:41.434237     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:42 embed-certs-296314 kubelet[778]: I1019 17:35:42.446355     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:42 embed-certs-296314 kubelet[778]: E1019 17:35:42.446487     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:44 embed-certs-296314 kubelet[778]: I1019 17:35:44.386080     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:44 embed-certs-296314 kubelet[778]: E1019 17:35:44.386257     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.317923     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.486102     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.486389     778 scope.go:117] "RemoveContainer" containerID="a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: E1019 17:35:56.486584     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.507150     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qqbvj" podStartSLOduration=11.979621191 podStartE2EDuration="21.507131672s" podCreationTimestamp="2025-10-19 17:35:35 +0000 UTC" firstStartedPulling="2025-10-19 17:35:36.030455231 +0000 UTC m=+9.927144244" lastFinishedPulling="2025-10-19 17:35:45.557965703 +0000 UTC m=+19.454654725" observedRunningTime="2025-10-19 17:35:46.480529329 +0000 UTC m=+20.377218376" watchObservedRunningTime="2025-10-19 17:35:56.507131672 +0000 UTC m=+30.403820686"
	Oct 19 17:36:03 embed-certs-296314 kubelet[778]: I1019 17:36:03.509778     778 scope.go:117] "RemoveContainer" containerID="93e57ed7f8473a0c891f8066794b585dc8e89167476e00470494528ae25c959e"
	Oct 19 17:36:04 embed-certs-296314 kubelet[778]: I1019 17:36:04.386954     778 scope.go:117] "RemoveContainer" containerID="a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	Oct 19 17:36:04 embed-certs-296314 kubelet[778]: E1019 17:36:04.387146     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:36:16 embed-certs-296314 kubelet[778]: I1019 17:36:16.318991     778 scope.go:117] "RemoveContainer" containerID="a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	Oct 19 17:36:16 embed-certs-296314 kubelet[778]: E1019 17:36:16.319180     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:36:29 embed-certs-296314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:36:29 embed-certs-296314 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:36:29 embed-certs-296314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [98f40f985abe20795cb17701cf451856590428d95c58119f4bb35737e7c3454c] <==
	2025/10/19 17:35:45 Using namespace: kubernetes-dashboard
	2025/10/19 17:35:45 Using in-cluster config to connect to apiserver
	2025/10/19 17:35:45 Using secret token for csrf signing
	2025/10/19 17:35:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:35:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:35:45 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:35:45 Generating JWE encryption key
	2025/10/19 17:35:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:35:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:35:46 Initializing JWE encryption key from synchronized object
	2025/10/19 17:35:46 Creating in-cluster Sidecar client
	2025/10/19 17:35:46 Serving insecurely on HTTP port: 9090
	2025/10/19 17:35:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:36:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:35:45 Starting overwatch
	
	
	==> storage-provisioner [93e57ed7f8473a0c891f8066794b585dc8e89167476e00470494528ae25c959e] <==
	I1019 17:35:32.791228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:36:02.793250       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07] <==
	W1019 17:36:03.580891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:07.035738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:11.296153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:14.893926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:17.947283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:20.970087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:20.977977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:36:20.978219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:36:20.978429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-296314_c3e1b3d0-70ba-456b-8b89-377585519ccc!
	I1019 17:36:20.979143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f3e02ef7-e677-43d7-8f2d-de68a05d0331", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-296314_c3e1b3d0-70ba-456b-8b89-377585519ccc became leader
	W1019 17:36:20.981269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:20.988265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:36:21.079223       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-296314_c3e1b3d0-70ba-456b-8b89-377585519ccc!
	W1019 17:36:22.992005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:23.013613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:25.017540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:25.023172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:27.026297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:27.036414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:29.040711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:29.053597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:31.068410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:31.082083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:33.095948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:33.119360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-296314 -n embed-certs-296314
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-296314 -n embed-certs-296314: exit status 2 (565.011785ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-296314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-296314
helpers_test.go:243: (dbg) docker inspect embed-certs-296314:

-- stdout --
	[
	    {
	        "Id": "5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d",
	        "Created": "2025-10-19T17:33:35.165314955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 242458,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:35:18.82347683Z",
	            "FinishedAt": "2025-10-19T17:35:17.947438069Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/hosts",
	        "LogPath": "/var/lib/docker/containers/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d-json.log",
	        "Name": "/embed-certs-296314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-296314:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-296314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d",
	                "LowerDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae25daf02d6d9cfda516417e03b1e9cf8d8145db087ba444e79620e70c79bedf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-296314",
	                "Source": "/var/lib/docker/volumes/embed-certs-296314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-296314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-296314",
	                "name.minikube.sigs.k8s.io": "embed-certs-296314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bd708428eb1fb90ebe3090945eb274eacb194a4ba95e86919142265c2928f213",
	            "SandboxKey": "/var/run/docker/netns/bd708428eb1f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-296314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d7:68:58:69:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b85768c3935a46e7e3c1643ba28d42a950563959f3252b2b534926365c369610",
	                    "EndpointID": "f2cce0ca8c5d56e9157e77255a0811d2a144668cd996a91e1f99ac70d6e43204",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-296314",
	                        "5854ebe0a2d7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
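
Note on the container inspect above: HostConfig.PortBindings requests 127.0.0.1 bindings with an empty HostPort, so Docker assigns ephemeral host ports when the container starts; the actual assignments are reported under NetworkSettings.Ports (22/tcp -> 33113, 2376/tcp -> 33114, 5000/tcp -> 33115, 8443/tcp -> 33116, 32443/tcp -> 33117). A minimal way to read one of these back is the same Go-template query the start logs further below run against another profile (substituting this profile's container name here is illustrative):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-296314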
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314: exit status 2 (534.30983ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
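A Host state of "Running" together with exit status 2 is consistent with a paused profile: minikube status encodes component health in its exit code, and a non-zero code here indicates a cluster component, not the host container, is stopped or paused, which is expected immediately after a pause. That is why the helper marks the error as "may be ok".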
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-296314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-296314 logs -n 25: (1.924565396s)
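The dump that follows is the output of that logs command: an Audit table of the most recent minikube invocations in this run, followed by the tail of the "Last Start" log. The same post-mortem data can be collected by hand against a live profile, e.g.:

	out/minikube-linux-arm64 -p embed-certs-296314 logs -n 25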
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-038781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ stop    │ -p no-preload-038781 --alsologtostderr -v=3                                                                                                                              │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ image   │ old-k8s-version-125363 image list --format=json                                                                                                                          │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ pause   │ -p old-k8s-version-125363 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ image   │ no-preload-038781 image list --format=json                                                                                                                               │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                              │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ delete  │ -p no-preload-038781                                                                                                                                                     │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p no-preload-038781                                                                                                                                                     │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                          │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                             │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-370596 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ image   │ embed-certs-296314 image list --format=json                                                                                                                              │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ pause   │ -p embed-certs-296314 --alsologtostderr -v=1                                                                                                                             │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:36:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:36:23.056552  245420 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:36:23.056740  245420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:23.056773  245420 out.go:374] Setting ErrFile to fd 2...
	I1019 17:36:23.056793  245420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:23.057557  245420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:36:23.058029  245420 out.go:368] Setting JSON to false
	I1019 17:36:23.059087  245420 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4731,"bootTime":1760890652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:36:23.059167  245420 start.go:143] virtualization:  
	I1019 17:36:23.062741  245420 out.go:179] * [default-k8s-diff-port-370596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:36:23.066560  245420 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:36:23.066684  245420 notify.go:221] Checking for updates...
	I1019 17:36:23.072593  245420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:36:23.075570  245420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:36:23.078323  245420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:36:23.081292  245420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:36:23.084155  245420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:36:23.087461  245420 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:23.088147  245420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:36:23.114962  245420 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:36:23.115082  245420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:36:23.189771  245420 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:36:23.180539878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:36:23.189878  245420 docker.go:319] overlay module found
	I1019 17:36:23.192966  245420 out.go:179] * Using the docker driver based on existing profile
	I1019 17:36:23.195796  245420 start.go:309] selected driver: docker
	I1019 17:36:23.195816  245420 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:36:23.195919  245420 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:36:23.196632  245420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:36:23.257313  245420 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:36:23.248194418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:36:23.257656  245420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:36:23.257696  245420 cni.go:84] Creating CNI manager for ""
	I1019 17:36:23.257752  245420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:36:23.257788  245420 start.go:353] cluster config:
	{Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:36:23.262798  245420 out.go:179] * Starting "default-k8s-diff-port-370596" primary control-plane node in "default-k8s-diff-port-370596" cluster
	I1019 17:36:23.265725  245420 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:36:23.268675  245420 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:36:23.271401  245420 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:36:23.271469  245420 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:36:23.271484  245420 cache.go:59] Caching tarball of preloaded images
	I1019 17:36:23.271489  245420 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:36:23.271599  245420 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:36:23.271611  245420 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:36:23.271742  245420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/config.json ...
	I1019 17:36:23.292279  245420 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:36:23.292303  245420 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:36:23.292317  245420 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:36:23.292340  245420 start.go:360] acquireMachinesLock for default-k8s-diff-port-370596: {Name:mk4e5a46aec1705453bccb79fee591d547fbb19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:36:23.292404  245420 start.go:364] duration metric: took 33.166µs to acquireMachinesLock for "default-k8s-diff-port-370596"
	I1019 17:36:23.292426  245420 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:36:23.292433  245420 fix.go:54] fixHost starting: 
	I1019 17:36:23.292680  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:23.308913  245420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-370596: state=Stopped err=<nil>
	W1019 17:36:23.308955  245420 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:36:23.312189  245420 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-370596" ...
	I1019 17:36:23.312268  245420 cli_runner.go:164] Run: docker start default-k8s-diff-port-370596
	I1019 17:36:23.548713  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:23.578694  245420 kic.go:430] container "default-k8s-diff-port-370596" state is running.
	I1019 17:36:23.579078  245420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:36:23.601908  245420 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/config.json ...
	I1019 17:36:23.602140  245420 machine.go:94] provisionDockerMachine start ...
	I1019 17:36:23.602206  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:23.628214  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:23.628539  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:23.628554  245420 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:36:23.629167  245420 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 17:36:26.778112  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-370596
	
	I1019 17:36:26.778192  245420 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-370596"
	I1019 17:36:26.778282  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:26.796187  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:26.796565  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:26.796582  245420 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-370596 && echo "default-k8s-diff-port-370596" | sudo tee /etc/hostname
	I1019 17:36:26.955800  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-370596
	
	I1019 17:36:26.955877  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:26.974068  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:26.974376  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:26.974397  245420 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-370596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-370596/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-370596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:36:27.126724  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:36:27.126751  245420 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:36:27.126770  245420 ubuntu.go:190] setting up certificates
	I1019 17:36:27.126779  245420 provision.go:84] configureAuth start
	I1019 17:36:27.126839  245420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:36:27.145022  245420 provision.go:143] copyHostCerts
	I1019 17:36:27.145091  245420 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:36:27.145113  245420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:36:27.145190  245420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:36:27.145331  245420 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:36:27.145343  245420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:36:27.145371  245420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:36:27.145427  245420 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:36:27.145434  245420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:36:27.145458  245420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:36:27.145511  245420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-370596 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-370596 localhost minikube]
	I1019 17:36:27.844619  245420 provision.go:177] copyRemoteCerts
	I1019 17:36:27.844685  245420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:36:27.844732  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:27.863019  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:27.966558  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:36:27.985775  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 17:36:28.006801  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:36:28.026951  245420 provision.go:87] duration metric: took 900.158192ms to configureAuth
	I1019 17:36:28.026983  245420 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:36:28.027219  245420 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:28.027357  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.045805  245420 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:28.046130  245420 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1019 17:36:28.046154  245420 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:36:28.358453  245420 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:36:28.358572  245420 machine.go:97] duration metric: took 4.756416202s to provisionDockerMachine
	I1019 17:36:28.358604  245420 start.go:293] postStartSetup for "default-k8s-diff-port-370596" (driver="docker")
	I1019 17:36:28.358629  245420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:36:28.358729  245420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:36:28.358810  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.380768  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.494686  245420 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:36:28.498206  245420 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:36:28.498281  245420 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:36:28.498301  245420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:36:28.498368  245420 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:36:28.498446  245420 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:36:28.498603  245420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:36:28.506199  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:36:28.524813  245420 start.go:296] duration metric: took 166.179375ms for postStartSetup
	I1019 17:36:28.524919  245420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:36:28.524974  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.542002  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.644044  245420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:36:28.649145  245420 fix.go:56] duration metric: took 5.356705609s for fixHost
	I1019 17:36:28.649171  245420 start.go:83] releasing machines lock for "default-k8s-diff-port-370596", held for 5.356755955s
	I1019 17:36:28.649268  245420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-370596
	I1019 17:36:28.666520  245420 ssh_runner.go:195] Run: cat /version.json
	I1019 17:36:28.666721  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.666638  245420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:36:28.666834  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:28.685731  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.686937  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:28.790477  245420 ssh_runner.go:195] Run: systemctl --version
	I1019 17:36:28.891140  245420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:36:28.943386  245420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:36:28.949395  245420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:36:28.949477  245420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:36:28.964133  245420 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:36:28.964158  245420 start.go:496] detecting cgroup driver to use...
	I1019 17:36:28.964195  245420 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:36:28.964242  245420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:36:28.985553  245420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:36:29.004136  245420 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:36:29.004206  245420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:36:29.022374  245420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:36:29.037217  245420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:36:29.186372  245420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:36:29.351871  245420 docker.go:234] disabling docker service ...
	I1019 17:36:29.351932  245420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:36:29.372497  245420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:36:29.391821  245420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:36:29.534143  245420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:36:29.720579  245420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:36:29.738685  245420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:36:29.754628  245420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:36:29.754703  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.764798  245420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:36:29.764910  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.774210  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.783621  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.793726  245420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:36:29.802122  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.817657  245420 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.829977  245420 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:29.839710  245420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:36:29.850341  245420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:36:29.857869  245420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:36:29.997259  245420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:36:30.236206  245420 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:36:30.236351  245420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:36:30.241014  245420 start.go:564] Will wait 60s for crictl version
	I1019 17:36:30.241176  245420 ssh_runner.go:195] Run: which crictl
	I1019 17:36:30.246312  245420 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:36:30.282620  245420 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:36:30.282754  245420 ssh_runner.go:195] Run: crio --version
	I1019 17:36:30.319012  245420 ssh_runner.go:195] Run: crio --version
	I1019 17:36:30.364348  245420 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:36:30.367302  245420 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-370596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:36:30.386842  245420 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 17:36:30.391472  245420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:36:30.401707  245420 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:36:30.401828  245420 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:36:30.401890  245420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:36:30.440774  245420 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:36:30.440802  245420 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:36:30.440858  245420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:36:30.469373  245420 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:36:30.469396  245420 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:36:30.469405  245420 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1019 17:36:30.469505  245420 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-370596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:36:30.469622  245420 ssh_runner.go:195] Run: crio config
	I1019 17:36:30.532054  245420 cni.go:84] Creating CNI manager for ""
	I1019 17:36:30.532076  245420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:36:30.532098  245420 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:36:30.532121  245420 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-370596 NodeName:default-k8s-diff-port-370596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:36:30.532258  245420 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-370596"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
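
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written out as /var/tmp/minikube/kubeadm.yaml.new by the scp at 17:36:30.574963 below. When a diff-port profile misbehaves, a quick sanity check is that every document is present and that bindPort agrees with the 8444 in controlPlaneEndpoint. A minimal sketch of that check in Go, assuming gopkg.in/yaml.v3 is available; the path is the one from this log:

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]any
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		// Print the kind of each document so a missing section is obvious.
    		fmt.Printf("found %v/%v\n", doc["apiVersion"], doc["kind"])
    		// For the diff-port profile, bindPort must match the port in
    		// controlPlaneEndpoint (8444 in the config above).
    		if doc["kind"] == "InitConfiguration" {
    			if ep, ok := doc["localAPIEndpoint"].(map[string]any); ok {
    				fmt.Println("bindPort:", ep["bindPort"])
    			}
    		}
    	}
    }
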
	I1019 17:36:30.532335  245420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:36:30.540330  245420 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:36:30.540399  245420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:36:30.548303  245420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 17:36:30.561819  245420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:36:30.574963  245420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1019 17:36:30.588557  245420 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:36:30.592377  245420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
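
The bash one-liner above is minikube's idempotent /etc/hosts update: the grep -v drops any stale line ending in a tab plus control-plane.minikube.internal, the echo appends the current mapping, and the result is copied back over /etc/hosts. The same logic as a Go sketch; hosts.copy is a hypothetical scratch file so this can be tried without touching the real /etc/hosts (blank lines are dropped here for brevity, which the shell pipeline does not do):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    // upsertHost rewrites path so it contains exactly one "ip\thost" line,
    // mirroring the grep -v / echo pipeline in the log above.
    func upsertHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop any stale mapping for this host
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHost("hosts.copy", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }
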
	I1019 17:36:30.602465  245420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:36:30.730864  245420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:36:30.750518  245420 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596 for IP: 192.168.76.2
	I1019 17:36:30.750566  245420 certs.go:195] generating shared ca certs ...
	I1019 17:36:30.750596  245420 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:30.750739  245420 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:36:30.750797  245420 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:36:30.750808  245420 certs.go:257] generating profile certs ...
	I1019 17:36:30.750894  245420 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/client.key
	I1019 17:36:30.750955  245420 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key.27fdbacf
	I1019 17:36:30.751000  245420 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.key
	I1019 17:36:30.751108  245420 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:36:30.751142  245420 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:36:30.751155  245420 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:36:30.751182  245420 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:36:30.751209  245420 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:36:30.751233  245420 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:36:30.751276  245420 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:36:30.751943  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:36:30.801692  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:36:30.883761  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:36:30.975027  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:36:31.012292  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 17:36:31.046318  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:36:31.068429  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:36:31.098097  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/default-k8s-diff-port-370596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:36:31.135393  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:36:31.183320  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:36:31.207172  245420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:36:31.227501  245420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:36:31.244186  245420 ssh_runner.go:195] Run: openssl version
	I1019 17:36:31.251423  245420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:36:31.260900  245420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:36:31.265597  245420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:36:31.265669  245420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:36:31.315399  245420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:36:31.324214  245420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:36:31.333413  245420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:36:31.339766  245420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:36:31.339833  245420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:36:31.384056  245420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:36:31.393025  245420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:36:31.402382  245420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:36:31.406313  245420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:36:31.406388  245420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:36:31.453449  245420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
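
The three ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: TLS libraries look certificates up in /etc/ssl/certs by the hash that `openssl x509 -hash` prints, which is why each install is paired with a hash run. A small Go sketch that reproduces the link name for a PEM by shelling out to the same openssl invocation used in this log:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
    	// Same command as the log: print the subject hash of the certificate.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// b5213941 for the minikube CA, per the symlink created above.
    	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
    }
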
	I1019 17:36:31.474430  245420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:36:31.487471  245420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:36:31.624913  245420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:36:31.807469  245420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:36:31.982814  245420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:36:32.066721  245420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:36:32.114167  245420 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
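
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which is what would force regeneration before the cluster restart. The equivalent test in Go with crypto/x509, as a sketch against one of the files probed above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Same semantics as `openssl x509 -checkend 86400`.
    	deadline := time.Now().Add(24 * time.Hour)
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate expires within 24h; regenerate")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
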
	I1019 17:36:32.209242  245420 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-370596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-370596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:36:32.209337  245420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:36:32.209407  245420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:36:32.268085  245420 cri.go:89] found id: "5cf150c07bffb7c7dc4c126c49627f73d20284751e58cc8c02bde67d1ed68c3c"
	I1019 17:36:32.268111  245420 cri.go:89] found id: "aca1c44b76285c09db2393734432a8efea9ed5daf6067f6faf51a17b63af121b"
	I1019 17:36:32.268116  245420 cri.go:89] found id: "d4509ad64c1eb11af3d453484caa9c46a9674da90e577b46cf1ad436550a9bfe"
	I1019 17:36:32.268130  245420 cri.go:89] found id: "195750df18b095565b5aa6d68d380e0477dcd39d96118413146e6f3cc1d5a7bd"
	I1019 17:36:32.268134  245420 cri.go:89] found id: ""
	I1019 17:36:32.268181  245420 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:36:32.294124  245420 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:36:32Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:36:32.294216  245420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:36:32.321033  245420 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:36:32.321055  245420 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:36:32.321122  245420 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:36:32.349998  245420 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:36:32.351039  245420 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-370596" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:36:32.351689  245420 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-370596" cluster setting kubeconfig missing "default-k8s-diff-port-370596" context setting]
	I1019 17:36:32.352563  245420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:32.354726  245420 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:36:32.378335  245420 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 17:36:32.378372  245420 kubeadm.go:602] duration metric: took 57.310999ms to restartPrimaryControlPlane
	I1019 17:36:32.378381  245420 kubeadm.go:403] duration metric: took 169.150224ms to StartCluster
	I1019 17:36:32.378395  245420 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:32.378451  245420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:36:32.380671  245420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:32.380985  245420 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:36:32.381288  245420 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:36:32.381535  245420 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-370596"
	I1019 17:36:32.381560  245420 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-370596"
	W1019 17:36:32.381577  245420 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:36:32.381600  245420 host.go:66] Checking if "default-k8s-diff-port-370596" exists ...
	I1019 17:36:32.382087  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:32.381199  245420 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:32.382657  245420 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-370596"
	I1019 17:36:32.382676  245420 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-370596"
	W1019 17:36:32.382683  245420 addons.go:248] addon dashboard should already be in state true
	I1019 17:36:32.382896  245420 host.go:66] Checking if "default-k8s-diff-port-370596" exists ...
	I1019 17:36:32.383525  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:32.383733  245420 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-370596"
	I1019 17:36:32.383789  245420 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-370596"
	I1019 17:36:32.384066  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:32.387001  245420 out.go:179] * Verifying Kubernetes components...
	I1019 17:36:32.390211  245420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:36:32.421893  245420 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:36:32.424915  245420 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:36:32.427715  245420 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:36:32.427753  245420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:36:32.427822  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:32.441609  245420 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:36:32.444744  245420 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:36:32.444769  245420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:36:32.444834  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:32.464306  245420 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-370596"
	W1019 17:36:32.464329  245420 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:36:32.464354  245420 host.go:66] Checking if "default-k8s-diff-port-370596" exists ...
	I1019 17:36:32.464782  245420 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:36:32.478978  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:32.509923  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:32.515033  245420 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:36:32.515053  245420 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:36:32.515111  245420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:36:32.553057  245420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:36:32.837118  245420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:36:32.837190  245420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:36:32.855125  245420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:36:32.882601  245420 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-370596" to be "Ready" ...
	I1019 17:36:32.893953  245420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:36:32.913843  245420 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:36:32.913862  245420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:36:32.968413  245420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:36:33.035354  245420 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:36:33.035375  245420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
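
While the dashboard manifests are still being copied, the wait started at 17:36:32.882601 above polls the node object until its Ready condition is True, within the 6m0s budget logged by start.go. A minimal client-go sketch of the same readiness probe; the kubeconfig path is an assumption (clientcmd's default of ~/.kube/config) and the node name is the profile above:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // same budget as start.go above
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
    			"default-k8s-diff-port-370596", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("node did not become Ready within 6m")
    }
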
	
	
	==> CRI-O <==
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.513189927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9b9623b8-2269-42ee-8264-478776f3915e name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.514660536Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3ea585e5-4134-4a29-bdc9-e6068c43eef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.514923449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.526397907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.526859362Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1bfe1e42b365b033f65ebcbee5f2676b4c4c61dc4a96f433b4a094bfa5328753/merged/etc/passwd: no such file or directory"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.526905861Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1bfe1e42b365b033f65ebcbee5f2676b4c4c61dc4a96f433b4a094bfa5328753/merged/etc/group: no such file or directory"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.527395812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.548080976Z" level=info msg="Created container b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07: kube-system/storage-provisioner/storage-provisioner" id=3ea585e5-4134-4a29-bdc9-e6068c43eef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.549071882Z" level=info msg="Starting container: b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07" id=38c31fd8-5c14-4e8b-864c-aebb056c4bdd name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:36:03 embed-certs-296314 crio[652]: time="2025-10-19T17:36:03.551005948Z" level=info msg="Started container" PID=1641 containerID=b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07 description=kube-system/storage-provisioner/storage-provisioner id=38c31fd8-5c14-4e8b-864c-aebb056c4bdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a3de4d93e94448df972025a2e807ab8264a28a8cead47f4a57435893fe2c2d0
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.202665965Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.206274278Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.206308617Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.206330878Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.209420332Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.209462778Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.209482068Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.212854628Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.212888253Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.21291524Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.216673734Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.216708335Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.216729808Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.219580063Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:36:13 embed-certs-296314 crio[652]: time="2025-10-19T17:36:13.21961103Z" level=info msg="Updated default CNI network name to kindnet"
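
The CREATE → WRITE → RENAME sequence above is CRI-O's inotify watch on /etc/cni/net.d observing kindnet rewrite its conflist atomically (write 10-kindnet.conflist.temp, then rename it into place). A sketch of the same watch, assuming github.com/fsnotify/fsnotify:

    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	// Each CREATE/WRITE/RENAME in the CRI-O log above corresponds to one event here.
    	for ev := range w.Events {
    		log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
    	}
    }
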
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b8323b93b0c18       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           32 seconds ago       Running             storage-provisioner         2                   8a3de4d93e944       storage-provisioner                          kube-system
	a03a9a22e4c9c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           39 seconds ago       Exited              dashboard-metrics-scraper   2                   c7b3e97cc73fa       dashboard-metrics-scraper-6ffb444bf9-sz9f5   kubernetes-dashboard
	98f40f985abe2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   dd758cfa5e40b       kubernetes-dashboard-855c9754f9-qqbvj        kubernetes-dashboard
	d5351136756ea       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   5a338b7f5cc39       coredns-66bc5c9577-2xbw2                     kube-system
	7ae94595875e7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   2656135bd177b       busybox                                      default
	2b961a279052e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   bf45195b73c05       kube-proxy-5sj42                             kube-system
	2ce143425275a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   d847e6abcaeeb       kindnet-7nwqx                                kube-system
	93e57ed7f8473       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   8a3de4d93e944       storage-provisioner                          kube-system
	419c95753ba61       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   58d0af3067288       kube-controller-manager-embed-certs-296314   kube-system
	f1ebcf0400230       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0ece62fda443c       kube-scheduler-embed-certs-296314            kube-system
	601d05c29e65e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c692e82082920       kube-apiserver-embed-certs-296314            kube-system
	1b872d3de58c8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c13564e4e2791       etcd-embed-certs-296314                      kube-system
	
	
	==> coredns [d5351136756eab8472ddeeb973620be9c36bf0fe3334b6702fa621c82598d70b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50762 - 495 "HINFO IN 1850847966072531663.994089535842801053. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.003980045s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
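
The dial tcp 10.96.0.1:443: i/o timeout entries show CoreDNS briefly unable to reach the kubernetes Service VIP before kindnet finished syncing (its caches sync at 17:36:04 in the kindnet log below). A trivial probe of that VIP, as a Go sketch meant to run from inside a pod:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint CoreDNS was timing out against: the kubernetes Service VIP.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		fmt.Println("apiserver VIP unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver VIP reachable")
    }
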
	
	
	==> describe nodes <==
	Name:               embed-certs-296314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-296314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=embed-certs-296314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_34_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:34:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-296314
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:36:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:33:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:36:02 +0000   Sun, 19 Oct 2025 17:34:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-296314
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                d8253982-2ff8-43b9-b6f4-cc698577d51f
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-2xbw2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-embed-certs-296314                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-7nwqx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-embed-certs-296314             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-embed-certs-296314    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-5sj42                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-embed-certs-296314             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sz9f5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qqbvj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m25s                  kube-proxy       
	  Normal   Starting                 62s                    kube-proxy       
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m40s)  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m40s)  kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m40s)  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m27s                  node-controller  Node embed-certs-296314 event: Registered Node embed-certs-296314 in Controller
	  Normal   NodeReady                105s                   kubelet          Node embed-certs-296314 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-296314 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-296314 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-296314 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                    node-controller  Node embed-certs-296314 event: Registered Node embed-certs-296314 in Controller
	
	
	==> dmesg <==
	[Oct19 17:13] overlayfs: idmapped layers are currently not supported
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	[Oct19 17:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1b872d3de58c84db020f0ee9ad021aaf524cc7e1a2f5753ee9ccc615f3d60b9e] <==
	{"level":"warn","ts":"2025-10-19T17:35:30.040775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.111931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.149920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.168796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.200503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.223319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.239588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.261221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.273801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.292608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.309632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.339113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.359098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.373937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.398132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.417281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.448154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.467527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.489906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.506590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.538475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.570509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.610133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.657343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:35:30.770509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59802","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:36:36 up  1:19,  0 user,  load average: 3.49, 3.78, 3.47
	Linux embed-certs-296314 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ce143425275aa97757397246ab5e496dea31d6212964223b640c12d73d5bd87] <==
	I1019 17:35:32.956790       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:35:32.996448       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:35:33.002817       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:35:33.002849       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:35:33.002869       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:35:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:35:33.209580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:35:33.209793       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:35:33.209840       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:35:33.211189       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:36:03.197584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 17:36:03.211159       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 17:36:03.211356       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:36:03.211176       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 17:36:04.510088       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:36:04.510189       1 metrics.go:72] Registering metrics
	I1019 17:36:04.510277       1 controller.go:711] "Syncing nftables rules"
	I1019 17:36:13.202325       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:36:13.202364       1 main.go:301] handling current node
	I1019 17:36:23.203787       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:36:23.203892       1 main.go:301] handling current node
	I1019 17:36:33.203706       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 17:36:33.203733       1 main.go:301] handling current node
	
	
	==> kube-apiserver [601d05c29e65eea670a097054cee3344d68d6b3c679c2b5a8588e8ba24deefab] <==
	I1019 17:35:31.787310       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:35:31.798384       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:35:31.832203       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:35:31.848515       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:35:31.848659       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:35:31.848735       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:35:31.876232       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 17:35:31.876269       1 policy_source.go:240] refreshing policies
	I1019 17:35:31.881746       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:35:31.881863       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:35:31.881877       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:35:31.910888       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:35:31.925467       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:35:31.958176       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:35:32.250348       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:35:32.498000       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:35:33.008813       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:35:33.077023       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:35:33.115724       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:35:33.128208       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:35:33.198120       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.194.188"}
	I1019 17:35:33.217771       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.58.147"}
	I1019 17:35:35.477493       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:35:35.729718       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:35:35.776858       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [419c95753ba617267c87fde14322f90237df72a7488e84bda081428a2e533e7b] <==
	I1019 17:35:35.230672       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 17:35:35.230678       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 17:35:35.230685       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 17:35:35.230762       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:35:35.232288       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 17:35:35.238040       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:35:35.239298       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:35:35.240416       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:35:35.242572       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:35:35.244890       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:35:35.269011       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:35:35.270383       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:35:35.270458       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:35:35.270515       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:35:35.270691       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:35:35.270751       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:35:35.271041       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:35:35.272236       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:35:35.272295       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:35:35.283037       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:35:35.292340       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:35:35.292361       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:35:35.292369       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:35:35.734888       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1019 17:35:35.738522       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [2b961a279052eaef38f32facfd740a3beaeef53423104d9f42c20da1ee788acd] <==
	I1019 17:35:33.233689       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:35:33.341245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:35:33.451314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:35:33.451433       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:35:33.451604       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:35:33.471179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:35:33.471230       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:35:33.475213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:35:33.475723       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:35:33.475977       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:35:33.478171       1 config.go:200] "Starting service config controller"
	I1019 17:35:33.478372       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:35:33.478421       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:35:33.478428       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:35:33.478455       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:35:33.478460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:35:33.484518       1 config.go:309] "Starting node config controller"
	I1019 17:35:33.484591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:35:33.484621       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:35:33.579457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:35:33.579469       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:35:33.579523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f1ebcf0400230671abb8861c8f1296b2ddc8747887ce982a7032673710caf431] <==
	I1019 17:35:30.667000       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:35:32.172256       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:35:32.172353       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:35:32.177725       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:35:32.177937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:35:32.179684       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:35:32.177913       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:35:32.179782       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:35:32.177952       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:35:32.185545       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:35:32.177965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:35:32.280368       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:35:32.280553       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:35:32.286243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:35:35 embed-certs-296314 kubelet[778]: I1019 17:35:35.760122     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f55b8585-f906-45b9-9eee-4978b9ccde17-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-qqbvj\" (UID: \"f55b8585-f906-45b9-9eee-4978b9ccde17\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qqbvj"
	Oct 19 17:35:35 embed-certs-296314 kubelet[778]: I1019 17:35:35.760259     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbvk\" (UniqueName: \"kubernetes.io/projected/e6bdf3e7-11f3-4453-b8be-ef8d46c59338-kube-api-access-txbvk\") pod \"dashboard-metrics-scraper-6ffb444bf9-sz9f5\" (UID: \"e6bdf3e7-11f3-4453-b8be-ef8d46c59338\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5"
	Oct 19 17:35:35 embed-certs-296314 kubelet[778]: W1019 17:35:35.997756     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/crio-c7b3e97cc73fa12e2ef0ffa82a7514899e08c518f0a8f67de87209a7d633ba77 WatchSource:0}: Error finding container c7b3e97cc73fa12e2ef0ffa82a7514899e08c518f0a8f67de87209a7d633ba77: Status 404 returned error can't find the container with id c7b3e97cc73fa12e2ef0ffa82a7514899e08c518f0a8f67de87209a7d633ba77
	Oct 19 17:35:36 embed-certs-296314 kubelet[778]: W1019 17:35:36.020185     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5854ebe0a2d7930e336ade15b3def62c37e2c00f09a5bedb4504cb02b041d69d/crio-dd758cfa5e40b3163d715135fcdccf6f4c09f289a6074cefb713ee9fd8be94e0 WatchSource:0}: Error finding container dd758cfa5e40b3163d715135fcdccf6f4c09f289a6074cefb713ee9fd8be94e0: Status 404 returned error can't find the container with id dd758cfa5e40b3163d715135fcdccf6f4c09f289a6074cefb713ee9fd8be94e0
	Oct 19 17:35:40 embed-certs-296314 kubelet[778]: I1019 17:35:40.425907     778 scope.go:117] "RemoveContainer" containerID="c1925e9b495c8e9bb365355d1e636ed5fcd30dc5d5a69081848da9c826d08241"
	Oct 19 17:35:41 embed-certs-296314 kubelet[778]: I1019 17:35:41.433697     778 scope.go:117] "RemoveContainer" containerID="c1925e9b495c8e9bb365355d1e636ed5fcd30dc5d5a69081848da9c826d08241"
	Oct 19 17:35:41 embed-certs-296314 kubelet[778]: I1019 17:35:41.434047     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:41 embed-certs-296314 kubelet[778]: E1019 17:35:41.434237     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:42 embed-certs-296314 kubelet[778]: I1019 17:35:42.446355     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:42 embed-certs-296314 kubelet[778]: E1019 17:35:42.446487     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:44 embed-certs-296314 kubelet[778]: I1019 17:35:44.386080     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:44 embed-certs-296314 kubelet[778]: E1019 17:35:44.386257     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.317923     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.486102     778 scope.go:117] "RemoveContainer" containerID="3ce1ae232e27065cdc9aee7e8f2d40df337880ba80710bdcc454e778412ba843"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.486389     778 scope.go:117] "RemoveContainer" containerID="a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: E1019 17:35:56.486584     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:35:56 embed-certs-296314 kubelet[778]: I1019 17:35:56.507150     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qqbvj" podStartSLOduration=11.979621191 podStartE2EDuration="21.507131672s" podCreationTimestamp="2025-10-19 17:35:35 +0000 UTC" firstStartedPulling="2025-10-19 17:35:36.030455231 +0000 UTC m=+9.927144244" lastFinishedPulling="2025-10-19 17:35:45.557965703 +0000 UTC m=+19.454654725" observedRunningTime="2025-10-19 17:35:46.480529329 +0000 UTC m=+20.377218376" watchObservedRunningTime="2025-10-19 17:35:56.507131672 +0000 UTC m=+30.403820686"
	Oct 19 17:36:03 embed-certs-296314 kubelet[778]: I1019 17:36:03.509778     778 scope.go:117] "RemoveContainer" containerID="93e57ed7f8473a0c891f8066794b585dc8e89167476e00470494528ae25c959e"
	Oct 19 17:36:04 embed-certs-296314 kubelet[778]: I1019 17:36:04.386954     778 scope.go:117] "RemoveContainer" containerID="a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	Oct 19 17:36:04 embed-certs-296314 kubelet[778]: E1019 17:36:04.387146     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:36:16 embed-certs-296314 kubelet[778]: I1019 17:36:16.318991     778 scope.go:117] "RemoveContainer" containerID="a03a9a22e4c9c38922230beab6d6eab8c0c93a2d9a8ae3df3517b8bd305c04e0"
	Oct 19 17:36:16 embed-certs-296314 kubelet[778]: E1019 17:36:16.319180     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sz9f5_kubernetes-dashboard(e6bdf3e7-11f3-4453-b8be-ef8d46c59338)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sz9f5" podUID="e6bdf3e7-11f3-4453-b8be-ef8d46c59338"
	Oct 19 17:36:29 embed-certs-296314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:36:29 embed-certs-296314 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:36:29 embed-certs-296314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [98f40f985abe20795cb17701cf451856590428d95c58119f4bb35737e7c3454c] <==
	2025/10/19 17:35:45 Using namespace: kubernetes-dashboard
	2025/10/19 17:35:45 Using in-cluster config to connect to apiserver
	2025/10/19 17:35:45 Using secret token for csrf signing
	2025/10/19 17:35:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:35:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:35:45 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:35:45 Generating JWE encryption key
	2025/10/19 17:35:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:35:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:35:46 Initializing JWE encryption key from synchronized object
	2025/10/19 17:35:46 Creating in-cluster Sidecar client
	2025/10/19 17:35:46 Serving insecurely on HTTP port: 9090
	2025/10/19 17:35:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:36:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:35:45 Starting overwatch
	
	
	==> storage-provisioner [93e57ed7f8473a0c891f8066794b585dc8e89167476e00470494528ae25c959e] <==
	I1019 17:35:32.791228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:36:02.793250       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b8323b93b0c18fabc08d666eaf5f6eec5beb58d95a5c4552ef83d82cf9818f07] <==
	W1019 17:36:11.296153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:14.893926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:17.947283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:20.970087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:20.977977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:36:20.978219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:36:20.978429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-296314_c3e1b3d0-70ba-456b-8b89-377585519ccc!
	I1019 17:36:20.979143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f3e02ef7-e677-43d7-8f2d-de68a05d0331", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-296314_c3e1b3d0-70ba-456b-8b89-377585519ccc became leader
	W1019 17:36:20.981269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:20.988265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:36:21.079223       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-296314_c3e1b3d0-70ba-456b-8b89-377585519ccc!
	W1019 17:36:22.992005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:23.013613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:25.017540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:25.023172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:27.026297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:27.036414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:29.040711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:29.053597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:31.068410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:31.082083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:33.095948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:33.119360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:35.135670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:36:35.144833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
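Note on the storage-provisioner logs above: the first instance (93e57ed7f847…) exited fatally exactly 30s after initializing because it could not reach the apiserver service VIP at 10.96.0.1:443, while the replacement instance (b8323b93b0c1…) came up and won the kube-system/k8s.io-minikube-hostpath leader election once the apiserver was reachable. A quick way to re-test that VIP from inside the node, assuming the embed-certs-296314 profile still exists and curl is present in the node image:

	minikube -p embed-certs-296314 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version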
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-296314 -n embed-certs-296314
E1019 17:36:37.342203    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-296314 -n embed-certs-296314: exit status 2 (601.250469ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-296314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.70s)
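The kubelet log earlier in this post-mortem shows dashboard-metrics-scraper-6ffb444bf9-sz9f5 stuck in CrashLoopBackOff with the back-off growing from 10s to 20s, while the kubernetes-dashboard pod itself started normally. A minimal follow-up, assuming the embed-certs-296314 kubeconfig context is still reachable, is to pull the crashed container's previous output:

	kubectl --context embed-certs-296314 -n kubernetes-dashboard \
	  logs deploy/dashboard-metrics-scraper --previous --tail=50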

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.39s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (279.136037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
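The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check: per the stderr, it runs sudo runc list -f json inside the node and treats the non-zero exit as fatal, and runc failed because its state directory /run/runc did not exist (/run is a tmpfs in the node container, per the docker inspect output below). A rough way to reproduce the probe by hand, assuming the newest-cni-633463 node container is still running:

	docker exec newest-cni-633463 sudo runc list -f json
	docker exec newest-cni-633463 ls -ld /run/runc   # confirm the state dir exists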
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-633463
helpers_test.go:243: (dbg) docker inspect newest-cni-633463:

-- stdout --
	[
	    {
	        "Id": "dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea",
	        "Created": "2025-10-19T17:36:48.723991016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249062,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:36:48.813702966Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/hosts",
	        "LogPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea-json.log",
	        "Name": "/newest-cni-633463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-633463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-633463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea",
	                "LowerDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-633463",
	                "Source": "/var/lib/docker/volumes/newest-cni-633463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-633463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-633463",
	                "name.minikube.sigs.k8s.io": "newest-cni-633463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d3472198e32694762336eff7c4304f2d7ba1101c1cb476565b2fefe602ad7c78",
	            "SandboxKey": "/var/run/docker/netns/d3472198e326",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-633463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:0b:f1:6d:80:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903462b71a8c585e1f826b3d07accd39a29c6c1814ddb40704a08f8813291f55",
	                    "EndpointID": "8f57e503ab76372984c9f82c23dc3b4a81fbe572219ddce3fffa8005cffa75c7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-633463",
	                        "dc48a98a25fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
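When reading these inspect dumps, only a handful of fields usually matter for the paused-state check: the container state and the tmpfs mounts backing /run. A small sketch that extracts just those, assuming jq is available on the CI host:

	docker inspect newest-cni-633463 \
	  | jq '.[0] | {Status: .State.Status, Paused: .State.Paused, Tmpfs: .HostConfig.Tmpfs}'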
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-633463 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-633463 logs -n 25: (1.123638574s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ delete  │ -p old-k8s-version-125363                                                                                                                                                                                                                     │ old-k8s-version-125363       │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:33 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:33 UTC │ 19 Oct 25 17:34 UTC │
	│ image   │ no-preload-038781 image list --format=json                                                                                                                                                                                                    │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                                                                                               │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-370596 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ embed-certs-296314 image list --format=json                                                                                                                                                                                                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ pause   │ -p embed-certs-296314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:36:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:36:41.871785  248425 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:36:41.871970  248425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:41.871977  248425 out.go:374] Setting ErrFile to fd 2...
	I1019 17:36:41.871982  248425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:36:41.872274  248425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:36:41.872718  248425 out.go:368] Setting JSON to false
	I1019 17:36:41.873728  248425 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4750,"bootTime":1760890652,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:36:41.873806  248425 start.go:143] virtualization:  
	I1019 17:36:41.879052  248425 out.go:179] * [newest-cni-633463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:36:41.882038  248425 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:36:41.882096  248425 notify.go:221] Checking for updates...
	I1019 17:36:41.888113  248425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:36:41.891349  248425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:36:41.894483  248425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:36:41.897774  248425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:36:41.900351  248425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:36:41.903752  248425 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:41.903890  248425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:36:41.952206  248425 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:36:41.952336  248425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:36:42.079481  248425 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:36:42.067749804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:36:42.079593  248425 docker.go:319] overlay module found
	I1019 17:36:42.083403  248425 out.go:179] * Using the docker driver based on user configuration
	I1019 17:36:39.463899  245420 node_ready.go:49] node "default-k8s-diff-port-370596" is "Ready"
	I1019 17:36:39.463931  245420 node_ready.go:38] duration metric: took 6.581298456s for node "default-k8s-diff-port-370596" to be "Ready" ...
	I1019 17:36:39.463945  245420 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:36:39.464008  245420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:36:42.015033  245420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.046587827s)
	I1019 17:36:42.015390  245420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.12136063s)
	I1019 17:36:42.181665  245420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.690023347s)
	I1019 17:36:42.181899  245420 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.717872804s)
	I1019 17:36:42.181924  245420 api_server.go:72] duration metric: took 9.800683897s to wait for apiserver process to appear ...
	I1019 17:36:42.181931  245420 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:36:42.181951  245420 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1019 17:36:42.185604  245420 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-370596 addons enable metrics-server
	
	I1019 17:36:42.188621  245420 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 17:36:42.086502  248425 start.go:309] selected driver: docker
	I1019 17:36:42.086664  248425 start.go:930] validating driver "docker" against <nil>
	I1019 17:36:42.086687  248425 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:36:42.087519  248425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:36:42.194342  248425 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 17:36:42.179534922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:36:42.194526  248425 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 17:36:42.194892  248425 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 17:36:42.195171  248425 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:36:42.198384  248425 out.go:179] * Using Docker driver with root privileges
	I1019 17:36:42.202002  248425 cni.go:84] Creating CNI manager for ""
	I1019 17:36:42.202144  248425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:36:42.202156  248425 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:36:42.202278  248425 start.go:353] cluster config:
	{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
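
The block above is minikube's in-memory cluster config, a Go struct that is later serialized to the profile's config.json (see the "Saving config" line further down). As a rough illustration only, with a hypothetical trimmed-down struct covering a small subset of the fields in the log, persisting such a config looks like:

// Minimal sketch (not minikube's actual types): persisting a cluster
// config struct as JSON, the way the profile's config.json is written.
package main

import (
	"encoding/json"
	"os"
)

// ClusterConfig mirrors a tiny, hypothetical subset of the fields above.
type ClusterConfig struct {
	Name              string `json:"Name"`
	Driver            string `json:"Driver"`
	Memory            int    `json:"Memory"`
	CPUs              int    `json:"CPUs"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
}

func saveConfig(path string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	cfg := ClusterConfig{
		Name: "newest-cni-633463", Driver: "docker",
		Memory: 3072, CPUs: 2,
		KubernetesVersion: "v1.34.1", ContainerRuntime: "crio",
	}
	_ = saveConfig("config.json", cfg)
}
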
	I1019 17:36:42.205475  248425 out.go:179] * Starting "newest-cni-633463" primary control-plane node in "newest-cni-633463" cluster
	I1019 17:36:42.208321  248425 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:36:42.211312  248425 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:36:42.214275  248425 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:36:42.214328  248425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:36:42.214682  248425 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:36:42.214711  248425 cache.go:59] Caching tarball of preloaded images
	I1019 17:36:42.214814  248425 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:36:42.214835  248425 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
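
The preload tarball name seen above encodes the preload schema version, the Kubernetes version, the container runtime, and the CPU architecture. A minimal sketch of that naming scheme, inferred purely from the path in the log:

// Sketch: reconstructing the preload tarball filename from its parts.
// The scheme is inferred from the path in the log above, not from source.
package main

import "fmt"

func preloadTarballName(schema, k8sVersion, runtime, arch string) string {
	return fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-overlay-%s.tar.lz4",
		schema, k8sVersion, runtime, arch)
}

func main() {
	// Reproduces the filename seen in the cache path above.
	fmt.Println(preloadTarballName("v18", "v1.34.1", "cri-o", "arm64"))
}
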
	I1019 17:36:42.214961  248425 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:36:42.214987  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json: {Name:mk9abb2138dea9642d522c4e9609a03db39ef5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:42.250108  248425 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:36:42.250128  248425 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:36:42.250143  248425 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:36:42.250167  248425 start.go:360] acquireMachinesLock for newest-cni-633463: {Name:mk5bb6cb5b9b89fc5f7e65da679c1a55c56b4fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:36:42.250271  248425 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "newest-cni-633463"
	I1019 17:36:42.250297  248425 start.go:93] Provisioning new machine with config: &{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:36:42.250373  248425 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:36:42.193804  245420 addons.go:515] duration metric: took 9.812496662s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 17:36:42.199793  245420 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1019 17:36:42.201517  245420 api_server.go:141] control plane version: v1.34.1
	I1019 17:36:42.201547  245420 api_server.go:131] duration metric: took 19.609359ms to wait for apiserver health ...
	I1019 17:36:42.201558  245420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:36:42.212005  245420 system_pods.go:59] 8 kube-system pods found
	I1019 17:36:42.212048  245420 system_pods.go:61] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:36:42.212057  245420 system_pods.go:61] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:36:42.212065  245420 system_pods.go:61] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:36:42.212075  245420 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:36:42.212082  245420 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:36:42.212087  245420 system_pods.go:61] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:36:42.212097  245420 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:36:42.212106  245420 system_pods.go:61] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Running
	I1019 17:36:42.212112  245420 system_pods.go:74] duration metric: took 10.547804ms to wait for pod list to return data ...
	I1019 17:36:42.212127  245420 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:36:42.222330  245420 default_sa.go:45] found service account: "default"
	I1019 17:36:42.222359  245420 default_sa.go:55] duration metric: took 10.226174ms for default service account to be created ...
	I1019 17:36:42.222370  245420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:36:42.226255  245420 system_pods.go:86] 8 kube-system pods found
	I1019 17:36:42.226291  245420 system_pods.go:89] "coredns-66bc5c9577-vjhwx" [28906e96-8f1a-4fa8-94fd-78e3c3892116] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:36:42.226298  245420 system_pods.go:89] "etcd-default-k8s-diff-port-370596" [e056873c-66fb-4018-903e-f9523e5a8426] Running
	I1019 17:36:42.226305  245420 system_pods.go:89] "kindnet-6xvl9" [5dfab6e1-f690-4a7c-8b62-87160d9a8971] Running
	I1019 17:36:42.226311  245420 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-370596" [38943f6f-255a-45bc-8734-a1a291f82c16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:36:42.226318  245420 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-370596" [8f55c743-3d48-4daf-a874-3f818226f6c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:36:42.226323  245420 system_pods.go:89] "kube-proxy-24xql" [fe5d7c3b-6719-434c-acc5-8a85ea0f703a] Running
	I1019 17:36:42.226330  245420 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-370596" [320354a3-04ba-422b-91c2-bd26d91aa6e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:36:42.226334  245420 system_pods.go:89] "storage-provisioner" [157cf698-27a7-446b-9122-e046c021a004] Running
	I1019 17:36:42.226341  245420 system_pods.go:126] duration metric: took 3.965051ms to wait for k8s-apps to be running ...
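
The "waiting for k8s-apps to be running" check above lists kube-system pods and verifies their state. This is not minikube's code, but a minimal client-go sketch of the same kind of wait, assuming a kubeconfig at the default location:

// Minimal client-go sketch (not minikube's implementation) of waiting
// for every kube-system pod to reach phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d/%d kube-system pods running\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
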
	I1019 17:36:42.226349  245420 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:36:42.226406  245420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:36:42.243500  245420 system_svc.go:56] duration metric: took 17.141822ms WaitForService to wait for kubelet
	I1019 17:36:42.243533  245420 kubeadm.go:587] duration metric: took 9.862295234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:36:42.243562  245420 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:36:42.247495  245420 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:36:42.247527  245420 node_conditions.go:123] node cpu capacity is 2
	I1019 17:36:42.247540  245420 node_conditions.go:105] duration metric: took 3.973092ms to run NodePressure ...
	I1019 17:36:42.247555  245420 start.go:242] waiting for startup goroutines ...
	I1019 17:36:42.247566  245420 start.go:247] waiting for cluster config update ...
	I1019 17:36:42.247580  245420 start.go:256] writing updated cluster config ...
	I1019 17:36:42.247920  245420 ssh_runner.go:195] Run: rm -f paused
	I1019 17:36:42.257319  245420 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:36:42.262019  245420 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vjhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:36:42.255541  248425 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:36:42.255905  248425 start.go:159] libmachine.API.Create for "newest-cni-633463" (driver="docker")
	I1019 17:36:42.255948  248425 client.go:171] LocalClient.Create starting
	I1019 17:36:42.256017  248425 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem
	I1019 17:36:42.256051  248425 main.go:143] libmachine: Decoding PEM data...
	I1019 17:36:42.256064  248425 main.go:143] libmachine: Parsing certificate...
	I1019 17:36:42.256127  248425 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem
	I1019 17:36:42.256151  248425 main.go:143] libmachine: Decoding PEM data...
	I1019 17:36:42.256165  248425 main.go:143] libmachine: Parsing certificate...
	I1019 17:36:42.256713  248425 cli_runner.go:164] Run: docker network inspect newest-cni-633463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:36:42.286328  248425 cli_runner.go:211] docker network inspect newest-cni-633463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:36:42.286415  248425 network_create.go:284] running [docker network inspect newest-cni-633463] to gather additional debugging logs...
	I1019 17:36:42.286434  248425 cli_runner.go:164] Run: docker network inspect newest-cni-633463
	W1019 17:36:42.303275  248425 cli_runner.go:211] docker network inspect newest-cni-633463 returned with exit code 1
	I1019 17:36:42.303311  248425 network_create.go:287] error running [docker network inspect newest-cni-633463]: docker network inspect newest-cni-633463: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-633463 not found
	I1019 17:36:42.303325  248425 network_create.go:289] output of [docker network inspect newest-cni-633463]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-633463 not found
	
	** /stderr **
	I1019 17:36:42.303416  248425 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:36:42.323906  248425 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
	I1019 17:36:42.324228  248425 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74bebb68d32f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:9e:84:17:01:b0} reservation:<nil>}
	I1019 17:36:42.324582  248425 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9382370e2eea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:16:7c:3f:44:e1} reservation:<nil>}
	I1019 17:36:42.324897  248425 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1ae64488c7e7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:83:47:36:e5:5b} reservation:<nil>}
	I1019 17:36:42.325327  248425 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b9a50}
	I1019 17:36:42.325352  248425 network_create.go:124] attempt to create docker network newest-cni-633463 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 17:36:42.325413  248425 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-633463 newest-cni-633463
	I1019 17:36:42.407589  248425 network_create.go:108] docker network newest-cni-633463 192.168.85.0/24 created
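
The subnet probing above walks candidate private /24s and skips any already claimed by an existing bridge network; the progression 49 -> 58 -> 67 -> 76 -> 85 suggests a step of 9 through the third octet. A minimal sketch of that scan, with the step size inferred from the log and isTaken standing in for inspecting existing docker networks:

// Sketch of the free-subnet scan visible above. The step of 9 is
// inferred from the log; "taken" stands in for docker network inspect.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
}
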
	I1019 17:36:42.407620  248425 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-633463" container
	I1019 17:36:42.407713  248425 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:36:42.440886  248425 cli_runner.go:164] Run: docker volume create newest-cni-633463 --label name.minikube.sigs.k8s.io=newest-cni-633463 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:36:42.460360  248425 oci.go:103] Successfully created a docker volume newest-cni-633463
	I1019 17:36:42.460452  248425 cli_runner.go:164] Run: docker run --rm --name newest-cni-633463-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-633463 --entrypoint /usr/bin/test -v newest-cni-633463:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:36:43.050168  248425 oci.go:107] Successfully prepared a docker volume newest-cni-633463
	I1019 17:36:43.050227  248425 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:36:43.050246  248425 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:36:43.050320  248425 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-633463:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 17:36:44.267825  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:36:46.268904  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	I1019 17:36:48.585745  248425 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-633463:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.535385367s)
	I1019 17:36:48.585772  248425 kic.go:203] duration metric: took 5.535523305s to extract preloaded images to volume ...
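
The extraction above mounts the lz4 preload tarball read-only into a throwaway container alongside the named volume, and untars it there so the node container can later reuse the volume. A sketch of shelling out to docker the same way, with placeholder paths:

// Sketch only: composing the same docker run tar-extraction command the
// log shows above. The tarball path and image tag are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = extractPreload("/path/to/preloaded-images.tar.lz4",
		"newest-cni-633463", "gcr.io/k8s-minikube/kicbase-builds:v0.0.48")
}
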
	W1019 17:36:48.585913  248425 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 17:36:48.586012  248425 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:36:48.704550  248425 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-633463 --name newest-cni-633463 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-633463 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-633463 --network newest-cni-633463 --ip 192.168.85.2 --volume newest-cni-633463:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:36:49.102686  248425 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Running}}
	I1019 17:36:49.128091  248425 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:36:49.152457  248425 cli_runner.go:164] Run: docker exec newest-cni-633463 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:36:49.225842  248425 oci.go:144] the created container "newest-cni-633463" has a running status.
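
The two inspect calls above check the freshly started node container's state. A minimal sketch of polling docker for a running status, the way those checks behave (the timeout and interval are illustrative):

// Sketch: polling `docker container inspect --format={{.State.Running}}`
// until the node container reports running, as the checks above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	_ = waitRunning("newest-cni-633463", 30*time.Second)
}
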
	I1019 17:36:49.225873  248425 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa...
	I1019 17:36:49.481793  248425 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:36:49.512190  248425 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:36:49.536533  248425 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:36:49.536551  248425 kic_runner.go:114] Args: [docker exec --privileged newest-cni-633463 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:36:49.613495  248425 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:36:49.642700  248425 machine.go:94] provisionDockerMachine start ...
	I1019 17:36:49.642786  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:49.671312  248425 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:49.671722  248425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1019 17:36:49.671739  248425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:36:49.672904  248425 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36760->127.0.0.1:33123: read: connection reset by peer
	W1019 17:36:48.768829  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:36:51.268197  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	I1019 17:36:52.834714  248425 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:36:52.834737  248425 ubuntu.go:182] provisioning hostname "newest-cni-633463"
	I1019 17:36:52.834799  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:52.859944  248425 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:52.860256  248425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1019 17:36:52.860268  248425 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-633463 && echo "newest-cni-633463" | sudo tee /etc/hostname
	I1019 17:36:53.037118  248425 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:36:53.037283  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:53.057122  248425 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:53.057458  248425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1019 17:36:53.057492  248425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-633463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-633463/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-633463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:36:53.215278  248425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:36:53.215310  248425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:36:53.215365  248425 ubuntu.go:190] setting up certificates
	I1019 17:36:53.215375  248425 provision.go:84] configureAuth start
	I1019 17:36:53.215463  248425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:36:53.242168  248425 provision.go:143] copyHostCerts
	I1019 17:36:53.242236  248425 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:36:53.242244  248425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:36:53.242315  248425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:36:53.242744  248425 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:36:53.242799  248425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:36:53.242971  248425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:36:53.243433  248425 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:36:53.243451  248425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:36:53.243500  248425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:36:53.243565  248425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-633463 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-633463]
	I1019 17:36:53.526938  248425 provision.go:177] copyRemoteCerts
	I1019 17:36:53.527053  248425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:36:53.527139  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:53.545346  248425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:36:53.650782  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:36:53.678155  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:36:53.703078  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:36:53.729158  248425 provision.go:87] duration metric: took 513.755041ms to configureAuth
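
configureAuth above generates a server certificate whose SANs cover the machine's IPs and hostnames before copying it to /etc/docker. This is not minikube's crypto code (minikube signs with its own CA), but a minimal crypto/x509 sketch of issuing a certificate with the same IP and DNS SANs:

// Minimal self-signed certificate with IP/DNS SANs via crypto/x509.
// A sketch of the idea only; minikube signs with its CA key instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-633463"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-633463"},
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	_ = os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}
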
	I1019 17:36:53.729195  248425 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:36:53.729414  248425 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:36:53.729534  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:53.755116  248425 main.go:143] libmachine: Using SSH client type: native
	I1019 17:36:53.755429  248425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1019 17:36:53.755453  248425 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:36:54.065159  248425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:36:54.065181  248425 machine.go:97] duration metric: took 4.422459665s to provisionDockerMachine
	I1019 17:36:54.065190  248425 client.go:174] duration metric: took 11.809235624s to LocalClient.Create
	I1019 17:36:54.065222  248425 start.go:167] duration metric: took 11.809300947s to libmachine.API.Create "newest-cni-633463"
	I1019 17:36:54.065229  248425 start.go:293] postStartSetup for "newest-cni-633463" (driver="docker")
	I1019 17:36:54.065239  248425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:36:54.065298  248425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:36:54.065343  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:54.095550  248425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:36:54.201211  248425 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:36:54.205317  248425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:36:54.205349  248425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:36:54.205361  248425 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:36:54.205414  248425 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:36:54.205492  248425 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:36:54.205602  248425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:36:54.214752  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:36:54.239481  248425 start.go:296] duration metric: took 174.23663ms for postStartSetup
	I1019 17:36:54.239870  248425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:36:54.262196  248425 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:36:54.262474  248425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:36:54.262513  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:54.296427  248425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:36:54.399914  248425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:36:54.407313  248425 start.go:128] duration metric: took 12.156926253s to createHost
	I1019 17:36:54.407340  248425 start.go:83] releasing machines lock for "newest-cni-633463", held for 12.157059826s
	I1019 17:36:54.407427  248425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:36:54.430376  248425 ssh_runner.go:195] Run: cat /version.json
	I1019 17:36:54.430428  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:54.430693  248425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:36:54.430741  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:36:54.468195  248425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:36:54.474847  248425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:36:54.700650  248425 ssh_runner.go:195] Run: systemctl --version
	I1019 17:36:54.707333  248425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:36:54.766435  248425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:36:54.774094  248425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:36:54.774207  248425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:36:54.820025  248425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 17:36:54.820099  248425 start.go:496] detecting cgroup driver to use...
	I1019 17:36:54.820147  248425 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:36:54.820232  248425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:36:54.844145  248425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:36:54.859929  248425 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:36:54.860039  248425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:36:54.880352  248425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:36:54.901109  248425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:36:55.063844  248425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:36:55.243294  248425 docker.go:234] disabling docker service ...
	I1019 17:36:55.243411  248425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:36:55.278945  248425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:36:55.294477  248425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:36:55.457371  248425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:36:55.620317  248425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:36:55.636213  248425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:36:55.651993  248425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:36:55.652101  248425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:55.661806  248425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:36:55.661902  248425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:55.672197  248425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:55.682868  248425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:55.692579  248425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:36:55.701519  248425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:55.711031  248425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:55.735959  248425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:36:55.745536  248425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:36:55.754179  248425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:36:55.762470  248425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:36:55.919895  248425 ssh_runner.go:195] Run: sudo systemctl restart crio
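
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) before restarting crio. A Go regexp sketch of the same in-place rewrite, simplified to the first two edits:

// Sketch: the in-place rewrites the sed commands above perform, done
// with Go regexps instead. Simplified; only two of the edits are shown.
package main

import (
	"os"
	"regexp"
)

func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() {
	_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf")
}
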
	I1019 17:36:56.318643  248425 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:36:56.318753  248425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:36:56.329480  248425 start.go:564] Will wait 60s for crictl version
	I1019 17:36:56.329591  248425 ssh_runner.go:195] Run: which crictl
	I1019 17:36:56.333645  248425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:36:56.372963  248425 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:36:56.373106  248425 ssh_runner.go:195] Run: crio --version
	I1019 17:36:56.411124  248425 ssh_runner.go:195] Run: crio --version
	I1019 17:36:56.456881  248425 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:36:56.459981  248425 cli_runner.go:164] Run: docker network inspect newest-cni-633463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:36:56.477246  248425 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:36:56.481398  248425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:36:56.496330  248425 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 17:36:56.499181  248425 kubeadm.go:884] updating cluster {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:36:56.499344  248425 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:36:56.499424  248425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:36:56.561184  248425 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:36:56.561211  248425 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:36:56.561282  248425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:36:56.591367  248425 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:36:56.591396  248425 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:36:56.591405  248425 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:36:56.591548  248425 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-633463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:36:56.591662  248425 ssh_runner.go:195] Run: crio config
	I1019 17:36:56.677580  248425 cni.go:84] Creating CNI manager for ""
	I1019 17:36:56.677602  248425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:36:56.677615  248425 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 17:36:56.677667  248425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-633463 NodeName:newest-cni-633463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:36:56.677823  248425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-633463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:36:56.677916  248425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:36:56.686575  248425 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:36:56.686673  248425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:36:56.694674  248425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:36:56.708463  248425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:36:56.722814  248425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
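
The kubeadm config above is rendered from the options map and written to /var/tmp/minikube/kubeadm.yaml.new. As an illustration only, a text/template sketch rendering one fragment of it (the template shape is hypothetical, not minikube's own):

// Sketch: rendering a fragment of the kubeadm config above with
// text/template. The template shape is illustrative only.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.85.2",
		"APIServerPort":    8443,
	})
}
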
	I1019 17:36:56.737499  248425 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:36:56.741604  248425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:36:56.751779  248425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1019 17:36:53.273298  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:36:55.775280  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	I1019 17:36:56.904570  248425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:36:56.928609  248425 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463 for IP: 192.168.85.2
	I1019 17:36:56.928631  248425 certs.go:195] generating shared ca certs ...
	I1019 17:36:56.928647  248425 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:56.928880  248425 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:36:56.928944  248425 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:36:56.928958  248425 certs.go:257] generating profile certs ...
	I1019 17:36:56.929031  248425 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.key
	I1019 17:36:56.929050  248425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.crt with IP's: []
	I1019 17:36:57.426465  248425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.crt ...
	I1019 17:36:57.426555  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.crt: {Name:mk6c123f4246f1e1cd1ca43dc201560f7a88d8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:57.426734  248425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.key ...
	I1019 17:36:57.426774  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.key: {Name:mk22dd109812442b8bb42f482f5eee9d390b1c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:57.426889  248425 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key.1ea41287
	I1019 17:36:57.426933  248425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt.1ea41287 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 17:36:58.246279  248425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt.1ea41287 ...
	I1019 17:36:58.246346  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt.1ea41287: {Name:mkfccc8757b4680ef33da3c9a81cbc3bf92f8264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:58.246576  248425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key.1ea41287 ...
	I1019 17:36:58.246615  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key.1ea41287: {Name:mkcb6b726cd4df391021417b6ac0a2644170f9f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:58.246748  248425 certs.go:382] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt.1ea41287 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt
	I1019 17:36:58.246865  248425 certs.go:386] copying /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key.1ea41287 -> /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key
	I1019 17:36:58.246969  248425 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key
	I1019 17:36:58.247015  248425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.crt with IP's: []
	I1019 17:36:59.641334  248425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.crt ...
	I1019 17:36:59.641363  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.crt: {Name:mk64f466818b25c92e6cde4508b8891dacc05377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:59.641564  248425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key ...
	I1019 17:36:59.641579  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key: {Name:mk9305043f1b103018c86fa83c1420b326921372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:36:59.641764  248425 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:36:59.641805  248425 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:36:59.641826  248425 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:36:59.641853  248425 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:36:59.641881  248425 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:36:59.641907  248425 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:36:59.641955  248425 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:36:59.642550  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:36:59.662726  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:36:59.683037  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:36:59.700881  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:36:59.726961  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:36:59.746285  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:36:59.773857  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:36:59.792124  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:36:59.812716  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:36:59.830068  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:36:59.848072  248425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:36:59.866267  248425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
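	The ssh_runner.go lines above copy the freshly generated certificates into the node over SSH (the machine's sshd is published on a host port; this run uses 127.0.0.1:33123 with user "docker"). A minimal sketch of one such copy step, shelling out to the scp CLI rather than using minikube's internal ssh_runner; the key path below is an illustrative assumption, not taken from this run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// copyCert pushes one local file to a path inside the node, mirroring the
	// "scp <local> --> /var/lib/minikube/certs/<name>" steps in the log above.
	func copyCert(localPath, remotePath string) error {
		cmd := exec.Command("scp",
			"-i", "/home/jenkins/.minikube/machines/example/id_rsa", // assumed key path
			"-P", "33123", // mapped SSH port for the node, as in this run
			localPath,
			fmt.Sprintf("docker@127.0.0.1:%s", remotePath),
		)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("scp %s: %v: %s", localPath, err, out)
		}
		return nil
	}

	func main() {
		if err := copyCert("ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
			log.Fatal(err)
		}
	}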
	I1019 17:36:59.879603  248425 ssh_runner.go:195] Run: openssl version
	I1019 17:36:59.885770  248425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:36:59.893942  248425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:36:59.897943  248425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:36:59.898057  248425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:36:59.941140  248425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:36:59.949467  248425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:36:59.957928  248425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:36:59.961679  248425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:36:59.961762  248425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:37:00.003972  248425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:37:00.015471  248425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:37:00.033891  248425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:37:00.041848  248425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:37:00.041977  248425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:37:00.157718  248425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
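	The loop above installs each CA into the node's trust store the way OpenSSL expects: compute the certificate's subject hash, then symlink the PEM as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A sketch of that hash-and-link step, shelling out to openssl for the hash as the log does; paths are illustrative and writing /etc/ssl/certs requires root:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// linkCA asks openssl for the subject hash of a CA certificate and links it
	// under /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it.
	func linkCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %w", err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// "ln -fs" semantics: drop any stale link for this hash before relinking.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}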
	I1019 17:37:00.184762  248425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:37:00.190806  248425 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 17:37:00.190932  248425 kubeadm.go:401] StartCluster: {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:00.191076  248425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:37:00.191190  248425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:37:00.242792  248425 cri.go:89] found id: ""
	I1019 17:37:00.242954  248425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:37:00.259623  248425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:37:00.285634  248425 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1019 17:37:00.285836  248425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:37:00.299522  248425 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:37:00.299599  248425 kubeadm.go:158] found existing configuration files:
	
	I1019 17:37:00.299733  248425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:37:00.326954  248425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:37:00.327132  248425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:37:00.345959  248425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:37:00.363195  248425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:37:00.363331  248425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:37:00.374687  248425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:37:00.386118  248425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:37:00.386307  248425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:37:00.398228  248425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:37:00.414199  248425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:37:00.414293  248425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
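	The four grep-then-rm repetitions above implement kubeadm's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. A sketch of that pattern (run on the node itself; file paths match the log):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	// cleanStale removes any kubeconfig that is missing or that does not point
	// at the expected control-plane endpoint, with "rm -f" (best-effort) semantics.
	func cleanStale(paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				if rmErr := os.Remove(p); rmErr == nil {
					log.Printf("removed stale config %s", p)
				}
			}
		}
	}

	func main() {
		cleanStale([]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}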
	I1019 17:37:00.429674  248425 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 17:37:00.515076  248425 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 17:37:00.515381  248425 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 17:37:00.606104  248425 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1019 17:36:58.270651  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:00.295542  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:02.768353  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:04.768872  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:06.772290  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:09.269551  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:11.269960  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	I1019 17:37:15.708684  248425 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 17:37:15.708752  248425 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 17:37:15.708865  248425 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1019 17:37:15.708955  248425 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 17:37:15.709003  248425 kubeadm.go:319] OS: Linux
	I1019 17:37:15.709053  248425 kubeadm.go:319] CGROUPS_CPU: enabled
	I1019 17:37:15.709107  248425 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1019 17:37:15.709178  248425 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1019 17:37:15.709243  248425 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1019 17:37:15.709301  248425 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1019 17:37:15.709356  248425 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1019 17:37:15.709427  248425 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1019 17:37:15.709497  248425 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1019 17:37:15.709557  248425 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1019 17:37:15.709644  248425 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 17:37:15.709749  248425 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 17:37:15.709845  248425 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 17:37:15.709913  248425 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 17:37:15.714908  248425 out.go:252]   - Generating certificates and keys ...
	I1019 17:37:15.715009  248425 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 17:37:15.715082  248425 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 17:37:15.715159  248425 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 17:37:15.715222  248425 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 17:37:15.715298  248425 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 17:37:15.715354  248425 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 17:37:15.715414  248425 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 17:37:15.715545  248425 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-633463] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:37:15.715605  248425 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 17:37:15.715733  248425 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-633463] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 17:37:15.715804  248425 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 17:37:15.715873  248425 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 17:37:15.715945  248425 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 17:37:15.716007  248425 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 17:37:15.716064  248425 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 17:37:15.716126  248425 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 17:37:15.716194  248425 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 17:37:15.716266  248425 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 17:37:15.716342  248425 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 17:37:15.716435  248425 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 17:37:15.716509  248425 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 17:37:15.719507  248425 out.go:252]   - Booting up control plane ...
	I1019 17:37:15.719638  248425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 17:37:15.719728  248425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 17:37:15.719804  248425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 17:37:15.719964  248425 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 17:37:15.720074  248425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 17:37:15.720209  248425 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 17:37:15.720304  248425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 17:37:15.720358  248425 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 17:37:15.720505  248425 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 17:37:15.720634  248425 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 17:37:15.720725  248425 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000858552s
	I1019 17:37:15.720850  248425 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:37:15.720954  248425 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1019 17:37:15.721062  248425 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:37:15.721146  248425 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:37:15.721225  248425 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.082165812s
	I1019 17:37:15.721300  248425 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.346871924s
	I1019 17:37:15.721370  248425 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501970659s
	I1019 17:37:15.721481  248425 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:37:15.721624  248425 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:37:15.721704  248425 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:37:15.721897  248425 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-633463 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:37:15.721960  248425 kubeadm.go:319] [bootstrap-token] Using token: zd48oh.9s454kuq25g47wc2
	I1019 17:37:15.725030  248425 out.go:252]   - Configuring RBAC rules ...
	I1019 17:37:15.725148  248425 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:37:15.725263  248425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:37:15.725427  248425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:37:15.725561  248425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:37:15.725681  248425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:37:15.725770  248425 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:37:15.725890  248425 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:37:15.725936  248425 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:37:15.725983  248425 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:37:15.725987  248425 kubeadm.go:319] 
	I1019 17:37:15.726051  248425 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:37:15.726055  248425 kubeadm.go:319] 
	I1019 17:37:15.726136  248425 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:37:15.726140  248425 kubeadm.go:319] 
	I1019 17:37:15.726167  248425 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:37:15.726229  248425 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:37:15.726282  248425 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:37:15.726286  248425 kubeadm.go:319] 
	I1019 17:37:15.726343  248425 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:37:15.726347  248425 kubeadm.go:319] 
	I1019 17:37:15.726397  248425 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:37:15.726401  248425 kubeadm.go:319] 
	I1019 17:37:15.726456  248425 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:37:15.726696  248425 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:37:15.726829  248425 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:37:15.726838  248425 kubeadm.go:319] 
	I1019 17:37:15.726935  248425 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:37:15.727021  248425 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:37:15.727025  248425 kubeadm.go:319] 
	I1019 17:37:15.727119  248425 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zd48oh.9s454kuq25g47wc2 \
	I1019 17:37:15.727262  248425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 \
	I1019 17:37:15.727286  248425 kubeadm.go:319] 	--control-plane 
	I1019 17:37:15.727291  248425 kubeadm.go:319] 
	I1019 17:37:15.727385  248425 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:37:15.727389  248425 kubeadm.go:319] 
	I1019 17:37:15.727481  248425 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zd48oh.9s454kuq25g47wc2 \
	I1019 17:37:15.727609  248425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e46e32887dad4fb3652c11cff3bedf8db657b48a4edf5ac902ac886eacf392c8 
	I1019 17:37:15.727617  248425 cni.go:84] Creating CNI manager for ""
	I1019 17:37:15.727624  248425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:15.730787  248425 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:37:15.733643  248425 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:37:15.738181  248425 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:37:15.738201  248425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:37:15.751299  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:37:16.080420  248425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:37:16.080534  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:16.080557  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-633463 minikube.k8s.io/updated_at=2025_10_19T17_37_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=newest-cni-633463 minikube.k8s.io/primary=true
	I1019 17:37:16.097198  248425 ops.go:34] apiserver oom_adj: -16
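	The ops.go check above confirms the apiserver's OOM score adjustment (-16, so the kernel's OOM killer strongly prefers other processes) by resolving its PID with pgrep and reading /proc. A sketch of that probe; it also reads the modern oom_score_adj knob alongside the deprecated oom_adj file the log uses:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			log.Fatalf("pgrep: %v", err)
		}
		pid := strings.Fields(string(out))[0] // first matching PID
		for _, knob := range []string{"oom_adj", "oom_score_adj"} {
			data, err := os.ReadFile(fmt.Sprintf("/proc/%s/%s", pid, knob))
			if err != nil {
				continue // knob may be absent on some kernels
			}
			fmt.Printf("%s = %s", knob, data)
		}
	}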
	I1019 17:37:16.234083  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:16.734425  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1019 17:37:13.768007  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:15.768484  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	W1019 17:37:18.269132  245420 pod_ready.go:104] pod "coredns-66bc5c9577-vjhwx" is not "Ready", error: <nil>
	I1019 17:37:19.281799  245420 pod_ready.go:94] pod "coredns-66bc5c9577-vjhwx" is "Ready"
	I1019 17:37:19.281824  245420 pod_ready.go:86] duration metric: took 37.019775965s for pod "coredns-66bc5c9577-vjhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:19.314271  245420 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:19.320525  245420 pod_ready.go:94] pod "etcd-default-k8s-diff-port-370596" is "Ready"
	I1019 17:37:19.320599  245420 pod_ready.go:86] duration metric: took 6.303756ms for pod "etcd-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:19.324094  245420 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:19.332561  245420 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-370596" is "Ready"
	I1019 17:37:19.332586  245420 pod_ready.go:86] duration metric: took 8.470078ms for pod "kube-apiserver-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:19.339344  245420 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:19.466717  245420 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-370596" is "Ready"
	I1019 17:37:19.466745  245420 pod_ready.go:86] duration metric: took 127.377525ms for pod "kube-controller-manager-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:19.665866  245420 pod_ready.go:83] waiting for pod "kube-proxy-24xql" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:20.065807  245420 pod_ready.go:94] pod "kube-proxy-24xql" is "Ready"
	I1019 17:37:20.065847  245420 pod_ready.go:86] duration metric: took 399.954678ms for pod "kube-proxy-24xql" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:20.266294  245420 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:20.667622  245420 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-370596" is "Ready"
	I1019 17:37:20.667650  245420 pod_ready.go:86] duration metric: took 401.330251ms for pod "kube-scheduler-default-k8s-diff-port-370596" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:37:20.667662  245420 pod_ready.go:40] duration metric: took 38.410311537s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:37:20.753190  245420 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:37:20.756115  245420 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-370596" cluster and "default" namespace by default
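	The pod_ready.go lines interleaved above (process 245420, the default-k8s-diff-port run) poll each kube-system pod until its Ready condition is True, then report the per-pod wait duration. A minimal client-go sketch of that wait loop, under the assumption of a reachable kubeconfig; the path and pod name below are illustrative:

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, give up after 4m, checking the pod's Ready condition.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-vjhwx", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatal(err)
		}
		log.Println(`pod "coredns-66bc5c9577-vjhwx" is "Ready"`)
	}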
	I1019 17:37:17.234198  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:17.735153  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:18.235064  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:18.734899  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:19.234166  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:19.734182  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:20.234434  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:20.735010  248425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:37:20.918936  248425 kubeadm.go:1114] duration metric: took 4.838512091s to wait for elevateKubeSystemPrivileges
	I1019 17:37:20.918962  248425 kubeadm.go:403] duration metric: took 20.728034934s to StartCluster
	I1019 17:37:20.918979  248425 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:20.919036  248425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:20.920027  248425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:20.920238  248425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:37:20.920383  248425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:37:20.921890  248425 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:20.926451  248425 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:37:20.926562  248425 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-633463"
	I1019 17:37:20.926579  248425 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-633463"
	I1019 17:37:20.926615  248425 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:20.927177  248425 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:20.930351  248425 out.go:179] * Verifying Kubernetes components...
	I1019 17:37:20.934827  248425 addons.go:70] Setting default-storageclass=true in profile "newest-cni-633463"
	I1019 17:37:20.934855  248425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-633463"
	I1019 17:37:20.935530  248425 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:20.935767  248425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:21.003510  248425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:37:21.006868  248425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:21.006902  248425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:37:21.006976  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:21.010249  248425 addons.go:239] Setting addon default-storageclass=true in "newest-cni-633463"
	I1019 17:37:21.010299  248425 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:21.011522  248425 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:21.062772  248425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:37:21.062796  248425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:37:21.062857  248425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:21.073398  248425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:21.096679  248425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:21.448529  248425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:21.507514  248425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:37:21.507683  248425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:37:21.575821  248425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:37:22.212720  248425 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:37:22.212782  248425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:37:22.212880  248425 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 17:37:22.244793  248425 api_server.go:72] duration metric: took 1.324528451s to wait for apiserver process to appear ...
	I1019 17:37:22.244857  248425 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:37:22.244889  248425 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:37:22.261384  248425 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:37:22.263386  248425 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:37:22.264992  248425 addons.go:515] duration metric: took 1.338539789s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:37:22.266951  248425 api_server.go:141] control plane version: v1.34.1
	I1019 17:37:22.267015  248425 api_server.go:131] duration metric: took 22.136706ms to wait for apiserver health ...
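	The api_server.go steps above wait for the apiserver process to appear, then poll https://192.168.85.2:8443/healthz until it returns 200 with body "ok". A sketch of that probe loop; minikube verifies the connection against the cluster CA, whereas this sketch skips TLS verification purely for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body) // expect "ok"
					return
				}
			}
			time.Sleep(time.Second)
		}
		log.Fatal("apiserver never became healthy")
	}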
	I1019 17:37:22.267041  248425 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:37:22.273075  248425 system_pods.go:59] 9 kube-system pods found
	I1019 17:37:22.273159  248425 system_pods.go:61] "coredns-66bc5c9577-brsql" [ceff5f1d-19f2-41fa-b367-b9edce9e3019] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:37:22.273185  248425 system_pods.go:61] "coredns-66bc5c9577-c4f4b" [05111d3d-bb2d-418d-8839-fd77dd6da259] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:37:22.273208  248425 system_pods.go:61] "etcd-newest-cni-633463" [6a5e2105-f5b2-42fe-b84e-b4fabe762787] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:37:22.273232  248425 system_pods.go:61] "kindnet-9zt9r" [225c1116-2e3f-4fe7-93d6-b3199509c1a8] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 17:37:22.273272  248425 system_pods.go:61] "kube-apiserver-newest-cni-633463" [ed52c336-ad74-4a2b-b340-80f71537080a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:37:22.273312  248425 system_pods.go:61] "kube-controller-manager-newest-cni-633463" [99395d0f-9a8b-4874-a0cc-9e1d8f64950e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:37:22.273334  248425 system_pods.go:61] "kube-proxy-gktcz" [ddc682d3-91d8-48e5-b254-cbb87e6f5106] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:37:22.273365  248425 system_pods.go:61] "kube-scheduler-newest-cni-633463" [f1e717aa-1eee-48e8-a48b-8980e8389603] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:37:22.273388  248425 system_pods.go:61] "storage-provisioner" [ba44ef1f-311c-409e-a01b-f15080f8ac35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:37:22.273418  248425 system_pods.go:74] duration metric: took 6.359429ms to wait for pod list to return data ...
	I1019 17:37:22.273439  248425 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:37:22.278032  248425 default_sa.go:45] found service account: "default"
	I1019 17:37:22.278105  248425 default_sa.go:55] duration metric: took 4.63838ms for default service account to be created ...
	I1019 17:37:22.278133  248425 kubeadm.go:587] duration metric: took 1.357871781s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:37:22.278163  248425 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:37:22.288242  248425 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:37:22.288325  248425 node_conditions.go:123] node cpu capacity is 2
	I1019 17:37:22.288353  248425 node_conditions.go:105] duration metric: took 10.145383ms to run NodePressure ...
	I1019 17:37:22.288379  248425 start.go:242] waiting for startup goroutines ...
	I1019 17:37:22.717520  248425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-633463" context rescaled to 1 replicas
	I1019 17:37:22.717555  248425 start.go:247] waiting for cluster config update ...
	I1019 17:37:22.717575  248425 start.go:256] writing updated cluster config ...
	I1019 17:37:22.717869  248425 ssh_runner.go:195] Run: rm -f paused
	I1019 17:37:22.788938  248425 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:37:22.794189  248425 out.go:179] * Done! kubectl is now configured to use "newest-cni-633463" cluster and "default" namespace by default
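	Just before finishing, the kapi.go step above rescales the "coredns" deployment to a single replica (kubeadm ships two, more than a one-node cluster needs). A client-go sketch of that rescale via the scale subresource: read the current scale, set Spec.Replicas, write it back. The kubeconfig path is illustrative:

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Println(`"coredns" deployment rescaled to 1 replicas`)
	}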
	
	
	==> CRI-O <==
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.737439764Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.746619776Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=92c2874f-05e1-42f0-9ee0-6b9b34ecb7a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.75697783Z" level=info msg="Ran pod sandbox 89a3ef7489cbf764d987087c80378698c92e4657003f6f408846a9b211da8141 with infra container: kube-system/kindnet-9zt9r/POD" id=92c2874f-05e1-42f0-9ee0-6b9b34ecb7a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.762102871Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=44be1436-97e0-4833-af42-71156b94a293 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.763557159Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=706c3f00-fc7c-4aa5-8960-d897c6774ea0 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.769782974Z" level=info msg="Creating container: kube-system/kindnet-9zt9r/kindnet-cni" id=b9dc7f24-0a6e-4bfd-b61a-d622ee0964f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.770062042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.775100895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.775626227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.791870055Z" level=info msg="Created container d9c4e60bcd368f23d9cf75a318f627f770e4cc60c43d1ffa46895675011c4c75: kube-system/kindnet-9zt9r/kindnet-cni" id=b9dc7f24-0a6e-4bfd-b61a-d622ee0964f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.792973905Z" level=info msg="Starting container: d9c4e60bcd368f23d9cf75a318f627f770e4cc60c43d1ffa46895675011c4c75" id=13bb2289-e86f-4687-9219-c404346423df name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.7951372Z" level=info msg="Started container" PID=1495 containerID=d9c4e60bcd368f23d9cf75a318f627f770e4cc60c43d1ffa46895675011c4c75 description=kube-system/kindnet-9zt9r/kindnet-cni id=13bb2289-e86f-4687-9219-c404346423df name=/runtime.v1.RuntimeService/StartContainer sandboxID=89a3ef7489cbf764d987087c80378698c92e4657003f6f408846a9b211da8141
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.844623359Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-gktcz/POD" id=61b6dd4c-bd38-4a3c-94b1-676352e44adc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.844698568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.848574288Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=61b6dd4c-bd38-4a3c-94b1-676352e44adc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.853931118Z" level=info msg="Ran pod sandbox 77c2126ea91006d640f9a8bc301c02325a0b058dffb00e40e84101c86f3cba2a with infra container: kube-system/kube-proxy-gktcz/POD" id=61b6dd4c-bd38-4a3c-94b1-676352e44adc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.855221023Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1258f18b-1923-48f4-85c3-a005635ab8d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.856570645Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=348dbec4-0ad3-45a5-9e2e-afe4d1594517 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.862257039Z" level=info msg="Creating container: kube-system/kube-proxy-gktcz/kube-proxy" id=eb55a700-5933-4345-a976-f4d69561a97a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.862565597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.869045528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.869607381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.916654418Z" level=info msg="Created container 7767fdbf581a4b6f88c3547b6810a582dcba89548c6fde348952ffacc6b1633b: kube-system/kube-proxy-gktcz/kube-proxy" id=eb55a700-5933-4345-a976-f4d69561a97a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.917716258Z" level=info msg="Starting container: 7767fdbf581a4b6f88c3547b6810a582dcba89548c6fde348952ffacc6b1633b" id=537960ea-28d8-4e3c-bf58-47f84a97010d name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:21 newest-cni-633463 crio[845]: time="2025-10-19T17:37:21.921514438Z" level=info msg="Started container" PID=1507 containerID=7767fdbf581a4b6f88c3547b6810a582dcba89548c6fde348952ffacc6b1633b description=kube-system/kube-proxy-gktcz/kube-proxy id=537960ea-28d8-4e3c-bf58-47f84a97010d name=/runtime.v1.RuntimeService/StartContainer sandboxID=77c2126ea91006d640f9a8bc301c02325a0b058dffb00e40e84101c86f3cba2a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7767fdbf581a4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   77c2126ea9100       kube-proxy-gktcz                            kube-system
	d9c4e60bcd368       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   89a3ef7489cbf       kindnet-9zt9r                               kube-system
	ead301899b101       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   4660e6dda8ac2       kube-scheduler-newest-cni-633463            kube-system
	faca03bd5ef97       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   0                   c51064f8899e7       kube-controller-manager-newest-cni-633463   kube-system
	29322179b8143       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   e662d21d30185       etcd-newest-cni-633463                      kube-system
	110aff2eb0aa1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   ac11f1f7463bc       kube-apiserver-newest-cni-633463            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-633463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-633463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=newest-cni-633463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:37:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-633463
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:37:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:37:15 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:37:15 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:37:15 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 17:37:15 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-633463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e953ded1-d3da-4e1a-97c3-cbeb95b772c3
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-633463                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-9zt9r                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-633463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-633463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-gktcz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-633463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-633463 event: Registered Node newest-cni-633463 in Controller
	
	
	==> dmesg <==
	[Oct19 17:15] overlayfs: idmapped layers are currently not supported
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	[Oct19 17:36] overlayfs: idmapped layers are currently not supported
	[Oct19 17:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29322179b8143d24d0aad7b0b86aa45933c8d03c8659c34ec44dd3c43d81a1a8] <==
	{"level":"warn","ts":"2025-10-19T17:37:10.434930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.456113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.481083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.497245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.517009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.540479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.556810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.581182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.602095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.636641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.673574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.699032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.714775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.749635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.771051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.790316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.809672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.824379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.838903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.864841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.887487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.923920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.932710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:10.948920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:11.057024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:37:24 up  1:19,  0 user,  load average: 4.79, 4.10, 3.59
	Linux newest-cni-633463 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d9c4e60bcd368f23d9cf75a318f627f770e4cc60c43d1ffa46895675011c4c75] <==
	I1019 17:37:21.906762       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:37:21.907025       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:37:21.907151       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:37:21.907163       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:37:21.907173       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:37:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:37:22.195474       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:37:22.195634       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:37:22.195694       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:37:22.196526       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [110aff2eb0aa11b03ada696ec97681615fd82df1c77ba62ceb639d9b8da58e8b] <==
	I1019 17:37:12.662628       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:37:12.662635       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:37:12.662642       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:37:12.692354       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:37:12.697307       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 17:37:12.749382       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:37:12.754680       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:37:12.829820       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:37:13.087185       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 17:37:13.099118       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 17:37:13.099224       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:37:13.930721       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:37:14.005467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:37:14.077828       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 17:37:14.089666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 17:37:14.091124       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 17:37:14.097160       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:37:14.900919       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:37:15.127093       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:37:15.148519       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 17:37:15.158080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:37:20.544064       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:37:20.596468       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:37:20.603424       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:37:20.951434       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [faca03bd5ef9738481010de3e99e47f5054ed92f8a73011f625acc0c72778fbd] <==
	I1019 17:37:19.938897       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:37:19.938925       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:37:19.939159       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 17:37:19.939272       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 17:37:19.939309       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:37:19.939345       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:37:19.939838       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:37:19.940882       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:37:19.940927       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:37:19.942648       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:37:19.942929       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 17:37:19.942998       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:37:19.943053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 17:37:19.943104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:37:19.944097       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:37:19.944466       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:37:19.944526       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:37:19.944610       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:37:19.944696       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-633463"
	I1019 17:37:19.944771       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:37:19.947565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:37:19.957352       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:37:19.978341       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:37:19.978454       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:37:19.978486       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [7767fdbf581a4b6f88c3547b6810a582dcba89548c6fde348952ffacc6b1633b] <==
	I1019 17:37:21.997812       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:37:22.097382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:37:22.199545       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:37:22.203082       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:37:22.203201       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:37:22.345508       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:37:22.345631       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:37:22.351315       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:37:22.351697       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:37:22.351773       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:37:22.356380       1 config.go:200] "Starting service config controller"
	I1019 17:37:22.356468       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:37:22.356517       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:37:22.356562       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:37:22.356600       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:37:22.356635       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:37:22.362007       1 config.go:309] "Starting node config controller"
	I1019 17:37:22.378434       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:37:22.380856       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:37:22.457359       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:37:22.457407       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:37:22.457359       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ead301899b101917281426d463d6233e176915ebe149e02f468f7cf4719c1692] <==
	I1019 17:37:13.116587       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:37:13.121262       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:37:13.121428       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:37:13.121472       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:37:13.121513       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 17:37:13.124992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 17:37:13.125080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 17:37:13.125141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 17:37:13.130915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 17:37:13.133626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 17:37:13.133719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 17:37:13.133772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 17:37:13.133838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 17:37:13.133904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 17:37:13.133949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 17:37:13.134071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 17:37:13.134116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 17:37:13.134169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 17:37:13.134211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 17:37:13.134377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 17:37:13.134422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 17:37:13.134453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 17:37:13.134497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 17:37:13.134578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1019 17:37:14.622373       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: I1019 17:37:16.293734    1310 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-633463"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: I1019 17:37:16.293972    1310 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-633463"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: E1019 17:37:16.319278    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-633463\" already exists" pod="kube-system/kube-controller-manager-newest-cni-633463"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: E1019 17:37:16.323065    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-633463\" already exists" pod="kube-system/kube-scheduler-newest-cni-633463"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: E1019 17:37:16.323354    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-633463\" already exists" pod="kube-system/etcd-newest-cni-633463"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: E1019 17:37:16.323530    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-633463\" already exists" pod="kube-system/kube-apiserver-newest-cni-633463"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: I1019 17:37:16.352694    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-633463" podStartSLOduration=1.352673643 podStartE2EDuration="1.352673643s" podCreationTimestamp="2025-10-19 17:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:37:16.339169372 +0000 UTC m=+1.337153452" watchObservedRunningTime="2025-10-19 17:37:16.352673643 +0000 UTC m=+1.350657723"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: I1019 17:37:16.367442    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-633463" podStartSLOduration=1.367422288 podStartE2EDuration="1.367422288s" podCreationTimestamp="2025-10-19 17:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:37:16.352905901 +0000 UTC m=+1.350889973" watchObservedRunningTime="2025-10-19 17:37:16.367422288 +0000 UTC m=+1.365406368"
	Oct 19 17:37:16 newest-cni-633463 kubelet[1310]: I1019 17:37:16.381326    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-633463" podStartSLOduration=1.381305912 podStartE2EDuration="1.381305912s" podCreationTimestamp="2025-10-19 17:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:37:16.367846753 +0000 UTC m=+1.365830825" watchObservedRunningTime="2025-10-19 17:37:16.381305912 +0000 UTC m=+1.379289984"
	Oct 19 17:37:19 newest-cni-633463 kubelet[1310]: I1019 17:37:19.949580    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 17:37:19 newest-cni-633463 kubelet[1310]: I1019 17:37:19.950164    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.127845    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-633463" podStartSLOduration=6.127825805 podStartE2EDuration="6.127825805s" podCreationTimestamp="2025-10-19 17:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:37:16.381799187 +0000 UTC m=+1.379783267" watchObservedRunningTime="2025-10-19 17:37:21.127825805 +0000 UTC m=+6.125809885"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.285730    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-cni-cfg\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.285787    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-xtables-lock\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.285815    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-lib-modules\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.285839    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8tg\" (UniqueName: \"kubernetes.io/projected/225c1116-2e3f-4fe7-93d6-b3199509c1a8-kube-api-access-wz8tg\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.403457    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc682d3-91d8-48e5-b254-cbb87e6f5106-lib-modules\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.413419    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqm5\" (UniqueName: \"kubernetes.io/projected/ddc682d3-91d8-48e5-b254-cbb87e6f5106-kube-api-access-5xqm5\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.413779    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddc682d3-91d8-48e5-b254-cbb87e6f5106-kube-proxy\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.413908    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc682d3-91d8-48e5-b254-cbb87e6f5106-xtables-lock\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: I1019 17:37:21.473053    1310 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: W1019 17:37:21.755184    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/crio-89a3ef7489cbf764d987087c80378698c92e4657003f6f408846a9b211da8141 WatchSource:0}: Error finding container 89a3ef7489cbf764d987087c80378698c92e4657003f6f408846a9b211da8141: Status 404 returned error can't find the container with id 89a3ef7489cbf764d987087c80378698c92e4657003f6f408846a9b211da8141
	Oct 19 17:37:21 newest-cni-633463 kubelet[1310]: W1019 17:37:21.851718    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/crio-77c2126ea91006d640f9a8bc301c02325a0b058dffb00e40e84101c86f3cba2a WatchSource:0}: Error finding container 77c2126ea91006d640f9a8bc301c02325a0b058dffb00e40e84101c86f3cba2a: Status 404 returned error can't find the container with id 77c2126ea91006d640f9a8bc301c02325a0b058dffb00e40e84101c86f3cba2a
	Oct 19 17:37:22 newest-cni-633463 kubelet[1310]: I1019 17:37:22.369931    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9zt9r" podStartSLOduration=2.369899244 podStartE2EDuration="2.369899244s" podCreationTimestamp="2025-10-19 17:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:37:22.369721929 +0000 UTC m=+7.367706001" watchObservedRunningTime="2025-10-19 17:37:22.369899244 +0000 UTC m=+7.367883316"
	Oct 19 17:37:22 newest-cni-633463 kubelet[1310]: I1019 17:37:22.370601    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gktcz" podStartSLOduration=2.370588041 podStartE2EDuration="2.370588041s" podCreationTimestamp="2025-10-19 17:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 17:37:22.339979706 +0000 UTC m=+7.337963786" watchObservedRunningTime="2025-10-19 17:37:22.370588041 +0000 UTC m=+7.368572146"
	

                                                
                                                
-- /stdout --
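
Three recurring entries in the log dump above are environmental rather than specific to this test: the kubelet's CgroupV1 maintenance-mode warning, the dmesg "overlayfs: idmapped layers are currently not supported" messages (expected on a 5.15 kernel, which predates overlayfs idmapped-mount support), and the etcd "rejected connection ... EOF" lines, which most commonly come from plain TCP health probes that close the connection before completing a TLS handshake. A minimal way to confirm the first two, assuming shell access to the node (for example via out/minikube-linux-arm64 ssh -p newest-cni-633463); these commands are illustrative and not part of the test suite:

	stat -fc %T /sys/fs/cgroup   # "cgroup2fs" would mean cgroup v2; "tmpfs" indicates the legacy v1 hierarchy the warning refers to
	uname -r                     # 5.15.0-1084-aws, consistent with the repeated overlayfs messages above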
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-633463 -n newest-cni-633463
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-633463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-c4f4b storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner: exit status 1 (90.639872ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-c4f4b" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.39s)
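
The NotFound errors in the post-mortem above are at least partly a namespace artifact: the describe is issued without -n kube-system, so kubectl looks for coredns-66bc5c9577-c4f4b and storage-provisioner in the default namespace; on this freshly started cluster the pods may also simply have been recreated under new names between the two kubectl calls. A scoped re-query (illustrative, not part of the harness) would distinguish the two cases:

	kubectl --context newest-cni-633463 -n kube-system describe pod coredns-66bc5c9577-c4f4b storage-provisioner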

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-370596 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-370596 --alsologtostderr -v=1: exit status 80 (2.148283317s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-370596 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:37:32.888366  252752 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:37:32.888542  252752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:32.888554  252752 out.go:374] Setting ErrFile to fd 2...
	I1019 17:37:32.888560  252752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:32.888886  252752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:37:32.889186  252752 out.go:368] Setting JSON to false
	I1019 17:37:32.889228  252752 mustload.go:66] Loading cluster: default-k8s-diff-port-370596
	I1019 17:37:32.889663  252752 config.go:182] Loaded profile config "default-k8s-diff-port-370596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:32.890262  252752 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-370596 --format={{.State.Status}}
	I1019 17:37:32.910796  252752 host.go:66] Checking if "default-k8s-diff-port-370596" exists ...
	I1019 17:37:32.911110  252752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:32.998779  252752 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 17:37:32.987188516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:32.999474  252752 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-370596 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:37:33.006950  252752 out.go:179] * Pausing node default-k8s-diff-port-370596 ... 
	I1019 17:37:33.010401  252752 host.go:66] Checking if "default-k8s-diff-port-370596" exists ...
	I1019 17:37:33.010945  252752 ssh_runner.go:195] Run: systemctl --version
	I1019 17:37:33.010996  252752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-370596
	I1019 17:37:33.039834  252752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/default-k8s-diff-port-370596/id_rsa Username:docker}
	I1019 17:37:33.153913  252752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:37:33.195502  252752 pause.go:52] kubelet running: true
	I1019 17:37:33.195665  252752 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:37:33.546339  252752 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:37:33.546423  252752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:37:33.625061  252752 cri.go:89] found id: "7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520"
	I1019 17:37:33.625080  252752 cri.go:89] found id: "30db141fa264b9a802684de3150779c5736b374899eb2f97d8dba30adc88c7d3"
	I1019 17:37:33.625085  252752 cri.go:89] found id: "f619f61aa27749c75021a3b43e6cb29266fc888091e65a14f87ce98a9c5c2415"
	I1019 17:37:33.625089  252752 cri.go:89] found id: "1407f79c02f56a6d1abaf7fcd2e5b44442d48282c70283b2b7f76b4a46ec759d"
	I1019 17:37:33.625093  252752 cri.go:89] found id: "d063568e642486afd257c23bc8b0d1fed9f45edb969d3248797e8792e9999f52"
	I1019 17:37:33.625097  252752 cri.go:89] found id: "5cf150c07bffb7c7dc4c126c49627f73d20284751e58cc8c02bde67d1ed68c3c"
	I1019 17:37:33.625100  252752 cri.go:89] found id: "aca1c44b76285c09db2393734432a8efea9ed5daf6067f6faf51a17b63af121b"
	I1019 17:37:33.625103  252752 cri.go:89] found id: "d4509ad64c1eb11af3d453484caa9c46a9674da90e577b46cf1ad436550a9bfe"
	I1019 17:37:33.625106  252752 cri.go:89] found id: "195750df18b095565b5aa6d68d380e0477dcd39d96118413146e6f3cc1d5a7bd"
	I1019 17:37:33.625113  252752 cri.go:89] found id: "7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	I1019 17:37:33.625116  252752 cri.go:89] found id: "d7e161aadc0e1cf960ad0ec63481467bb06b279f48656ae79ae0f9977a3fb9b9"
	I1019 17:37:33.625119  252752 cri.go:89] found id: ""
	I1019 17:37:33.625173  252752 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:37:33.643188  252752 retry.go:31] will retry after 309.014299ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:33Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:37:33.952395  252752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:37:33.966480  252752 pause.go:52] kubelet running: false
	I1019 17:37:33.966555  252752 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:37:34.209190  252752 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:37:34.209281  252752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:37:34.292589  252752 cri.go:89] found id: "7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520"
	I1019 17:37:34.292613  252752 cri.go:89] found id: "30db141fa264b9a802684de3150779c5736b374899eb2f97d8dba30adc88c7d3"
	I1019 17:37:34.292620  252752 cri.go:89] found id: "f619f61aa27749c75021a3b43e6cb29266fc888091e65a14f87ce98a9c5c2415"
	I1019 17:37:34.292624  252752 cri.go:89] found id: "1407f79c02f56a6d1abaf7fcd2e5b44442d48282c70283b2b7f76b4a46ec759d"
	I1019 17:37:34.292627  252752 cri.go:89] found id: "d063568e642486afd257c23bc8b0d1fed9f45edb969d3248797e8792e9999f52"
	I1019 17:37:34.292631  252752 cri.go:89] found id: "5cf150c07bffb7c7dc4c126c49627f73d20284751e58cc8c02bde67d1ed68c3c"
	I1019 17:37:34.292634  252752 cri.go:89] found id: "aca1c44b76285c09db2393734432a8efea9ed5daf6067f6faf51a17b63af121b"
	I1019 17:37:34.292637  252752 cri.go:89] found id: "d4509ad64c1eb11af3d453484caa9c46a9674da90e577b46cf1ad436550a9bfe"
	I1019 17:37:34.292640  252752 cri.go:89] found id: "195750df18b095565b5aa6d68d380e0477dcd39d96118413146e6f3cc1d5a7bd"
	I1019 17:37:34.292646  252752 cri.go:89] found id: "7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	I1019 17:37:34.292649  252752 cri.go:89] found id: "d7e161aadc0e1cf960ad0ec63481467bb06b279f48656ae79ae0f9977a3fb9b9"
	I1019 17:37:34.292653  252752 cri.go:89] found id: ""
	I1019 17:37:34.292714  252752 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:37:34.306749  252752 retry.go:31] will retry after 294.659441ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:34Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:37:34.602254  252752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:37:34.617994  252752 pause.go:52] kubelet running: false
	I1019 17:37:34.618060  252752 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:37:34.837835  252752 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:37:34.837911  252752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:37:34.927428  252752 cri.go:89] found id: "7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520"
	I1019 17:37:34.927453  252752 cri.go:89] found id: "30db141fa264b9a802684de3150779c5736b374899eb2f97d8dba30adc88c7d3"
	I1019 17:37:34.927458  252752 cri.go:89] found id: "f619f61aa27749c75021a3b43e6cb29266fc888091e65a14f87ce98a9c5c2415"
	I1019 17:37:34.927462  252752 cri.go:89] found id: "1407f79c02f56a6d1abaf7fcd2e5b44442d48282c70283b2b7f76b4a46ec759d"
	I1019 17:37:34.927465  252752 cri.go:89] found id: "d063568e642486afd257c23bc8b0d1fed9f45edb969d3248797e8792e9999f52"
	I1019 17:37:34.927468  252752 cri.go:89] found id: "5cf150c07bffb7c7dc4c126c49627f73d20284751e58cc8c02bde67d1ed68c3c"
	I1019 17:37:34.927471  252752 cri.go:89] found id: "aca1c44b76285c09db2393734432a8efea9ed5daf6067f6faf51a17b63af121b"
	I1019 17:37:34.927474  252752 cri.go:89] found id: "d4509ad64c1eb11af3d453484caa9c46a9674da90e577b46cf1ad436550a9bfe"
	I1019 17:37:34.927477  252752 cri.go:89] found id: "195750df18b095565b5aa6d68d380e0477dcd39d96118413146e6f3cc1d5a7bd"
	I1019 17:37:34.927483  252752 cri.go:89] found id: "7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	I1019 17:37:34.927486  252752 cri.go:89] found id: "d7e161aadc0e1cf960ad0ec63481467bb06b279f48656ae79ae0f9977a3fb9b9"
	I1019 17:37:34.927489  252752 cri.go:89] found id: ""
	I1019 17:37:34.927534  252752 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:37:34.946763  252752 out.go:203] 
	W1019 17:37:34.949693  252752 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:37:34.949720  252752 out.go:285] * 
	* 
	W1019 17:37:34.955042  252752 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:37:34.958305  252752 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-370596 --alsologtostderr -v=1 failed: exit status 80
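
The failure above reduces to a single probe: after stopping the kubelet, minikube's pause path enumerates running containers with sudo runc list -f json, and on this cri-o node that fails because runc's default state root /run/runc does not exist, plausibly because the OCI runtime actually in use keeps its state elsewhere (crun, for instance, uses /run/crun). A few illustrative checks from a shell on the node; the command names are real, but which paths exist and which runtime cri-o is configured with are the assumptions being tested:

	sudo ls -d /run/runc /run/crun /run/crio 2>&1                # which runtime state directories actually exist
	sudo crictl ps -a --quiet                                    # cri-o can still enumerate its containers either way
	sudo crio config 2>/dev/null | grep -A5 'default_runtime'   # hypothetical check: which default runtime cri-o reports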
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-370596
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-370596:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47",
	        "Created": "2025-10-19T17:34:41.755702895Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245546,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:36:23.345682306Z",
	            "FinishedAt": "2025-10-19T17:36:22.471881965Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/hosts",
	        "LogPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47-json.log",
	        "Name": "/default-k8s-diff-port-370596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-370596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-370596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47",
	                "LowerDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-370596",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-370596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-370596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-370596",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-370596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3112ffc5aaf2727c74f5f2a1d944a1aac02abc076e428800bcb16573c07878b5",
	            "SandboxKey": "/var/run/docker/netns/3112ffc5aaf2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-370596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:53:4d:17:8f:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ae64488c7e77a883b5d278e8675d09c05353cf5ff587cc6ffef79a9a972f538",
	                    "EndpointID": "01aa326daa410857d85d7442e9898287ce6da1f50ca62f7d35cf59e32c7d1637",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-370596",
	                        "fe1a19329d9f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
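The inspect payload above is large; for pause triage the fields that matter are State.Status, State.Paused, and State.Pid (here status=running, paused=false even though pause returned exit status 80). A minimal sketch that pulls just those fields with a Go template, using the field names shown in the JSON above:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' default-k8s-diff-port-370596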
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596: exit status 2 (490.964306ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
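Because --format={{.Host}} surfaces only the host state, a reasonable follow-up (a sketch, assuming the profile still exists) is to re-run status without the format flag so the kubelet, apiserver, and kubeconfig fields print as well; any component other than the host not being Running would explain the non-zero exit while Host still reports Running:

	out/minikube-linux-arm64 status -p default-k8s-diff-port-370596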
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-370596 logs -n 25
E1019 17:37:37.079747    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-370596 logs -n 25: (1.961808566s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-038781 image list --format=json                                                                                                                                                                                                    │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                                                                                               │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-370596 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ embed-certs-296314 image list --format=json                                                                                                                                                                                                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ pause   │ -p embed-certs-296314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ stop    │ -p newest-cni-633463 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-633463 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ image   │ default-k8s-diff-port-370596 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ pause   │ -p default-k8s-diff-port-370596 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:37:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:37:27.032239  252004 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:37:27.032438  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032469  252004 out.go:374] Setting ErrFile to fd 2...
	I1019 17:37:27.032495  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032763  252004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:37:27.033178  252004 out.go:368] Setting JSON to false
	I1019 17:37:27.034113  252004 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4795,"bootTime":1760890652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:37:27.034212  252004 start.go:143] virtualization:  
	I1019 17:37:27.039794  252004 out.go:179] * [newest-cni-633463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:37:27.043053  252004 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:37:27.043135  252004 notify.go:221] Checking for updates...
	I1019 17:37:27.049102  252004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:37:27.051961  252004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:27.054936  252004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:37:27.057816  252004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:37:27.060704  252004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:37:27.063995  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:27.064614  252004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:37:27.096144  252004 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:37:27.096298  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.151308  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.14172198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.151422  252004 docker.go:319] overlay module found
	I1019 17:37:27.154606  252004 out.go:179] * Using the docker driver based on existing profile
	I1019 17:37:27.157314  252004 start.go:309] selected driver: docker
	I1019 17:37:27.157331  252004 start.go:930] validating driver "docker" against &{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.157428  252004 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:37:27.158143  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.221005  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.211547247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.221371  252004 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:37:27.221408  252004 cni.go:84] Creating CNI manager for ""
	I1019 17:37:27.221460  252004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:27.221500  252004 start.go:353] cluster config:
	{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.224716  252004 out.go:179] * Starting "newest-cni-633463" primary control-plane node in "newest-cni-633463" cluster
	I1019 17:37:27.227526  252004 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:37:27.230600  252004 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:37:27.233300  252004 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:37:27.233356  252004 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:37:27.233386  252004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:37:27.233392  252004 cache.go:59] Caching tarball of preloaded images
	I1019 17:37:27.233571  252004 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:37:27.233580  252004 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:37:27.233695  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.253189  252004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:37:27.253215  252004 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:37:27.253229  252004 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:37:27.253253  252004 start.go:360] acquireMachinesLock for newest-cni-633463: {Name:mk5bb6cb5b9b89fc5f7e65da679c1a55c56b4fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:37:27.253329  252004 start.go:364] duration metric: took 36.292µs to acquireMachinesLock for "newest-cni-633463"
	I1019 17:37:27.253353  252004 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:37:27.253363  252004 fix.go:54] fixHost starting: 
	I1019 17:37:27.253610  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.270778  252004 fix.go:112] recreateIfNeeded on newest-cni-633463: state=Stopped err=<nil>
	W1019 17:37:27.270810  252004 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:37:27.274260  252004 out.go:252] * Restarting existing docker container for "newest-cni-633463" ...
	I1019 17:37:27.274384  252004 cli_runner.go:164] Run: docker start newest-cni-633463
	I1019 17:37:27.550197  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.572138  252004 kic.go:430] container "newest-cni-633463" state is running.
	I1019 17:37:27.572522  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:27.598627  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.598852  252004 machine.go:94] provisionDockerMachine start ...
	I1019 17:37:27.598917  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:27.621212  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:27.621530  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:27.621539  252004 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:37:27.622140  252004 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35560->127.0.0.1:33128: read: connection reset by peer
	I1019 17:37:30.774430  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.774458  252004 ubuntu.go:182] provisioning hostname "newest-cni-633463"
	I1019 17:37:30.774529  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.793360  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.793655  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.793671  252004 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-633463 && echo "newest-cni-633463" | sudo tee /etc/hostname
	I1019 17:37:30.956546  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.956622  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.978519  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.978856  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.978879  252004 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-633463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-633463/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-633463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:37:31.143453  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:37:31.143482  252004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:37:31.143503  252004 ubuntu.go:190] setting up certificates
	I1019 17:37:31.143530  252004 provision.go:84] configureAuth start
	I1019 17:37:31.143603  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:31.162905  252004 provision.go:143] copyHostCerts
	I1019 17:37:31.162977  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:37:31.163001  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:37:31.163081  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:37:31.163199  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:37:31.163210  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:37:31.163237  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:37:31.163303  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:37:31.163313  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:37:31.163341  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:37:31.163402  252004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-633463 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-633463]
	I1019 17:37:32.238364  252004 provision.go:177] copyRemoteCerts
	I1019 17:37:32.238433  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:37:32.238477  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.259782  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.363454  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:37:32.382228  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:37:32.402346  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:37:32.434653  252004 provision.go:87] duration metric: took 1.291102282s to configureAuth
	I1019 17:37:32.434677  252004 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:37:32.434877  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:32.434994  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.460163  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:32.460471  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:32.460484  252004 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:37:32.812582  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:37:32.812601  252004 machine.go:97] duration metric: took 5.213739158s to provisionDockerMachine
	I1019 17:37:32.812612  252004 start.go:293] postStartSetup for "newest-cni-633463" (driver="docker")
	I1019 17:37:32.812623  252004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:37:32.812687  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:37:32.812731  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.845647  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.958253  252004 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:37:32.962641  252004 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:37:32.962669  252004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:37:32.962681  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:37:32.962741  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:37:32.962825  252004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:37:32.962929  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:37:32.982498  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:37:33.019033  252004 start.go:296] duration metric: took 206.405729ms for postStartSetup
	I1019 17:37:33.019119  252004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:37:33.019182  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.060276  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.167952  252004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:37:33.176969  252004 fix.go:56] duration metric: took 5.923599942s for fixHost
	I1019 17:37:33.176995  252004 start.go:83] releasing machines lock for "newest-cni-633463", held for 5.923653801s
	I1019 17:37:33.177082  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:33.203375  252004 ssh_runner.go:195] Run: cat /version.json
	I1019 17:37:33.203411  252004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:37:33.203489  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.203426  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.248837  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.249412  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.469299  252004 ssh_runner.go:195] Run: systemctl --version
	I1019 17:37:33.477000  252004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:37:33.515118  252004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:37:33.520482  252004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:37:33.520556  252004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:37:33.529508  252004 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:37:33.529534  252004 start.go:496] detecting cgroup driver to use...
	I1019 17:37:33.529596  252004 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:37:33.529659  252004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:37:33.550450  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:37:33.567224  252004 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:37:33.567330  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:37:33.587367  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:37:33.603412  252004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:37:33.721296  252004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:37:33.835246  252004 docker.go:234] disabling docker service ...
	I1019 17:37:33.835350  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:37:33.850207  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:37:33.864410  252004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:37:33.985866  252004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:37:34.153123  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:37:34.167027  252004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:37:34.183139  252004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:37:34.183251  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.193611  252004 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:37:34.193726  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.203528  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.215302  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.225045  252004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:37:34.233944  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.244342  252004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.252329  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.263222  252004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:37:34.273315  252004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:37:34.281853  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:34.398185  252004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:37:34.534169  252004 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:37:34.534291  252004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:37:34.538246  252004 start.go:564] Will wait 60s for crictl version
	I1019 17:37:34.538363  252004 ssh_runner.go:195] Run: which crictl
	I1019 17:37:34.542091  252004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:37:34.567928  252004 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:37:34.568078  252004 ssh_runner.go:195] Run: crio --version
	I1019 17:37:34.597233  252004 ssh_runner.go:195] Run: crio --version
	I1019 17:37:34.633625  252004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:37:34.636469  252004 cli_runner.go:164] Run: docker network inspect newest-cni-633463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:37:34.651969  252004 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:37:34.656388  252004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:37:34.669690  252004 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Oct 19 17:37:08 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:08.467175323Z" level=info msg="Removed container b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq/dashboard-metrics-scraper" id=9d1f15fb-2e0c-4014-84fa-9f62dbc320e4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 conmon[1131]: conmon 1407f79c02f56a6d1aba <ninfo>: container 1139 exited with status 1
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.461295049Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4e093d4b-8ea1-4bb9-89ce-8357a323b049 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.463036504Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=15b98b95-2227-45d0-92ff-60f294a34032 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.464323767Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bd548da3-203a-4337-918f-c2140e91c9a4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.464734718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.483907185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.48433535Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/02d7627bbdf3fb277d68c93fc669bde481135e2c757796c182accce2f702df0d/merged/etc/passwd: no such file or directory"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.484472411Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/02d7627bbdf3fb277d68c93fc669bde481135e2c757796c182accce2f702df0d/merged/etc/group: no such file or directory"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.484918899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.513516059Z" level=info msg="Created container 7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520: kube-system/storage-provisioner/storage-provisioner" id=bd548da3-203a-4337-918f-c2140e91c9a4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.514614912Z" level=info msg="Starting container: 7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520" id=b4ea7224-f3d0-45af-902c-57f33c094031 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.519287706Z" level=info msg="Started container" PID=1640 containerID=7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520 description=kube-system/storage-provisioner/storage-provisioner id=b4ea7224-f3d0-45af-902c-57f33c094031 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5b1db43eeb577b64114b3cfdb46fefcb49ffd1faa35bd6a2a9060f01056dbcb
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.15283098Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.156581004Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.156615925Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.156637554Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.167159615Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.167197991Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.167222426Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.182952581Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.182985558Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.183013029Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.186972835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.187007314Z" level=info msg="Updated default CNI network name to kindnet"
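
The CREATE → WRITE → RENAME sequence CRI-O reports above is the standard atomic-update pattern: kindnet fills 10-kindnet.conflist.temp and then renames it into place, so the CNI watcher never reads a half-written config. A minimal sketch of that pattern (writeConflist is illustrative, not kindnet's actual code):

    package main

    import (
    	"os"
    	"path/filepath"
    )

    // writeConflist populates the .temp file first (the CREATE and WRITE events),
    // then renames it over the real name (the RENAME event). rename(2) is atomic
    // on one filesystem, so CRI-O's watcher sees either the old or the new config,
    // never a partially written one.
    func writeConflist(dir string, contents []byte) error {
    	tmp := filepath.Join(dir, "10-kindnet.conflist.temp")
    	if err := os.WriteFile(tmp, contents, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(dir, "10-kindnet.conflist"))
    }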
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7885d58b89b98       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   b5b1db43eeb57       storage-provisioner                                    kube-system
	7967fdc5cbdb0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   fbfbdfd588c18       dashboard-metrics-scraper-6ffb444bf9-wt7tq             kubernetes-dashboard
	d7e161aadc0e1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   cfc4e941983b5       kubernetes-dashboard-855c9754f9-vv2r4                  kubernetes-dashboard
	36abca155d76e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   95c7914169158       busybox                                                default
	30db141fa264b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   bcc23341146ea       coredns-66bc5c9577-vjhwx                               kube-system
	f619f61aa2774       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   b2303bdd4ecf8       kube-proxy-24xql                                       kube-system
	1407f79c02f56       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   b5b1db43eeb57       storage-provisioner                                    kube-system
	d063568e64248       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   b553c2fc40f43       kindnet-6xvl9                                          kube-system
	5cf150c07bffb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fb8085a254f50       kube-controller-manager-default-k8s-diff-port-370596   kube-system
	aca1c44b76285       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   0c6394c82ebbd       kube-apiserver-default-k8s-diff-port-370596            kube-system
	d4509ad64c1eb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   442e7e761a195       etcd-default-k8s-diff-port-370596                      kube-system
	195750df18b09       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c46d1cc4b9017       kube-scheduler-default-k8s-diff-port-370596            kube-system
	
	
	==> coredns [30db141fa264b9a802684de3150779c5736b374899eb2f97d8dba30adc88c7d3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54736 - 1776 "HINFO IN 5326392725884523671.8254381856711397482. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025910573s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-370596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-370596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=default-k8s-diff-port-370596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_35_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:35:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-370596
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:37:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-370596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e51b66e9-2b10-4f4c-b9ea-b7f9cb5ec8fe
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-vjhwx                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-370596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-6xvl9                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-370596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-370596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-24xql                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-370596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wt7tq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vv2r4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m21s              kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m28s              kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m28s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s              kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s              kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m28s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s              node-controller  Node default-k8s-diff-port-370596 event: Registered Node default-k8s-diff-port-370596 in Controller
	  Normal   NodeReady                101s               kubelet          Node default-k8s-diff-port-370596 status is now: NodeReady
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node default-k8s-diff-port-370596 event: Registered Node default-k8s-diff-port-370596 in Controller
	
	
	==> dmesg <==
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	[Oct19 17:36] overlayfs: idmapped layers are currently not supported
	[Oct19 17:37] overlayfs: idmapped layers are currently not supported
	[ +27.886872] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d4509ad64c1eb11af3d453484caa9c46a9674da90e577b46cf1ad436550a9bfe] <==
	{"level":"warn","ts":"2025-10-19T17:36:37.701851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.729827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.756507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.783080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.812046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.860031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.895638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.964272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.974343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.004848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.032154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.050007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.077941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.095065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.126453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.150412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.169694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.198794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.224215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.242632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.311209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.346184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.364727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.395941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.443822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59964","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:37:36 up  1:20,  0 user,  load average: 5.46, 4.28, 3.66
	Linux default-k8s-diff-port-370596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d063568e642486afd257c23bc8b0d1fed9f45edb969d3248797e8792e9999f52] <==
	I1019 17:36:40.754690       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:36:40.755899       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:36:40.756044       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:36:40.756057       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:36:40.756068       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:36:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:36:41.170831       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:36:41.170862       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:36:41.170885       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:36:41.171815       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:37:11.171910       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:37:11.172063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 17:37:11.172097       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:37:11.172176       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1019 17:37:12.671198       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:37:12.671296       1 metrics.go:72] Registering metrics
	I1019 17:37:12.671456       1 controller.go:711] "Syncing nftables rules"
	I1019 17:37:21.152203       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:37:21.152252       1 main.go:301] handling current node
	I1019 17:37:31.158626       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:37:31.158662       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aca1c44b76285c09db2393734432a8efea9ed5daf6067f6faf51a17b63af121b] <==
	I1019 17:36:39.524291       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:36:39.539061       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:36:39.539162       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:36:39.549440       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:36:39.569182       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:36:39.707142       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:36:39.707171       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:36:39.707181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:36:39.707188       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:36:39.725696       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:36:39.725757       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:36:39.739608       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:36:39.772178       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1019 17:36:39.828447       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:36:40.094240       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:36:40.246004       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:36:41.587605       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:36:41.811917       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:36:41.887865       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:36:41.943587       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:36:42.110943       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.5.4"}
	I1019 17:36:42.170869       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.54.23"}
	I1019 17:36:44.029691       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:36:44.084105       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:36:44.329493       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5cf150c07bffb7c7dc4c126c49627f73d20284751e58cc8c02bde67d1ed68c3c] <==
	I1019 17:36:43.900999       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:36:43.906682       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:36:43.906766       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:36:43.910005       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:36:43.913292       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:36:43.916581       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:36:43.917020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:36:43.917079       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:36:43.917121       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:36:43.920828       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:36:43.922307       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:36:43.924379       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:36:43.924485       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:36:43.925591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:36:43.927973       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:36:43.928056       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:36:43.928136       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-370596"
	I1019 17:36:43.928215       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 17:36:43.934645       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:36:43.934734       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:36:43.934771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:36:43.935989       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:36:43.951971       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:36:43.972949       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:36:43.974276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [f619f61aa27749c75021a3b43e6cb29266fc888091e65a14f87ce98a9c5c2415] <==
	I1019 17:36:41.280997       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:36:41.730235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:36:41.845557       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:36:41.845677       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:36:41.845792       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:36:42.011390       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:36:42.011543       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:36:42.104851       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:36:42.105683       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:36:42.105770       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:36:42.119484       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:36:42.119850       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:36:42.120381       1 config.go:200] "Starting service config controller"
	I1019 17:36:42.120568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:36:42.121002       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:36:42.126622       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:36:42.158862       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:36:42.125344       1 config.go:309] "Starting node config controller"
	I1019 17:36:42.167580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:36:42.167642       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:36:42.224389       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:36:42.224544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
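
The paired "Waiting for caches to sync" / "Caches are synced" lines here (and in the controller-manager and scheduler logs) are client-go's shared-informer handshake: each controller blocks until its informer's initial LIST has filled the local cache before acting on events. A minimal sketch with a single Services informer, assuming a ready clientset (waitForServices is illustrative):

    package main

    import (
    	"context"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    )

    // waitForServices starts a Services informer and blocks until its initial
    // LIST has populated the local cache (or ctx is cancelled). Event handlers
    // that ran before this point would see a partial view of the cluster.
    func waitForServices(ctx context.Context, clientset kubernetes.Interface) bool {
    	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
    	svcInformer := factory.Core().V1().Services().Informer()
    	factory.Start(ctx.Done())
    	return cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced)
    }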
	
	
	==> kube-scheduler [195750df18b095565b5aa6d68d380e0477dcd39d96118413146e6f3cc1d5a7bd] <==
	I1019 17:36:33.746690       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:36:39.322149       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:36:39.322190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:36:39.322202       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:36:39.322209       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:36:39.536139       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:36:39.536174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:36:39.546993       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:36:39.547146       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:36:39.547165       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:36:39.547182       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:36:39.647908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:44.564792     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzq2p\" (UniqueName: \"kubernetes.io/projected/0ad03331-716a-44bb-b0f4-2bb2271a8d3a-kube-api-access-kzq2p\") pod \"dashboard-metrics-scraper-6ffb444bf9-wt7tq\" (UID: \"0ad03331-716a-44bb-b0f4-2bb2271a8d3a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq"
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:44.665869     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsw5b\" (UniqueName: \"kubernetes.io/projected/1535a391-32cd-430f-911d-6f819ec0e20c-kube-api-access-xsw5b\") pod \"kubernetes-dashboard-855c9754f9-vv2r4\" (UID: \"1535a391-32cd-430f-911d-6f819ec0e20c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vv2r4"
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:44.665937     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1535a391-32cd-430f-911d-6f819ec0e20c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-vv2r4\" (UID: \"1535a391-32cd-430f-911d-6f819ec0e20c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vv2r4"
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: W1019 17:36:44.842396     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/crio-fbfbdfd588c183eb392d6dd24bbde9759235a98c2ef5fa30d91d0c9f09eee3e1 WatchSource:0}: Error finding container fbfbdfd588c183eb392d6dd24bbde9759235a98c2ef5fa30d91d0c9f09eee3e1: Status 404 returned error can't find the container with id fbfbdfd588c183eb392d6dd24bbde9759235a98c2ef5fa30d91d0c9f09eee3e1
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: W1019 17:36:44.881827     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/crio-cfc4e941983b54a4be375d4a3bf9d734a955f70bd2000f930e645730cd2fb192 WatchSource:0}: Error finding container cfc4e941983b54a4be375d4a3bf9d734a955f70bd2000f930e645730cd2fb192: Status 404 returned error can't find the container with id cfc4e941983b54a4be375d4a3bf9d734a955f70bd2000f930e645730cd2fb192
	Oct 19 17:36:49 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:49.247094     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:36:52 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:52.382919     777 scope.go:117] "RemoveContainer" containerID="10248de2281e0152636e6b2249cef2050714210ad68737634273fa37c112eb33"
	Oct 19 17:36:53 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:53.387332     777 scope.go:117] "RemoveContainer" containerID="10248de2281e0152636e6b2249cef2050714210ad68737634273fa37c112eb33"
	Oct 19 17:36:53 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:53.387612     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:36:53 default-k8s-diff-port-370596 kubelet[777]: E1019 17:36:53.387756     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:36:54 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:54.797249     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:36:54 default-k8s-diff-port-370596 kubelet[777]: E1019 17:36:54.797427     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.023548     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.445642     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.445924     777 scope.go:117] "RemoveContainer" containerID="7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: E1019 17:37:08.446071     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.477575     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vv2r4" podStartSLOduration=10.949970789 podStartE2EDuration="24.477557953s" podCreationTimestamp="2025-10-19 17:36:44 +0000 UTC" firstStartedPulling="2025-10-19 17:36:44.900073113 +0000 UTC m=+14.149118374" lastFinishedPulling="2025-10-19 17:36:58.427660277 +0000 UTC m=+27.676705538" observedRunningTime="2025-10-19 17:36:59.437941618 +0000 UTC m=+28.686986895" watchObservedRunningTime="2025-10-19 17:37:08.477557953 +0000 UTC m=+37.726603222"
	Oct 19 17:37:11 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:11.460320     777 scope.go:117] "RemoveContainer" containerID="1407f79c02f56a6d1abaf7fcd2e5b44442d48282c70283b2b7f76b4a46ec759d"
	Oct 19 17:37:14 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:14.797057     777 scope.go:117] "RemoveContainer" containerID="7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	Oct 19 17:37:14 default-k8s-diff-port-370596 kubelet[777]: E1019 17:37:14.798083     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:28 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:28.023435     777 scope.go:117] "RemoveContainer" containerID="7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	Oct 19 17:37:28 default-k8s-diff-port-370596 kubelet[777]: E1019 17:37:28.023626     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:33 default-k8s-diff-port-370596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:37:33 default-k8s-diff-port-370596 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:37:33 default-k8s-diff-port-370596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d7e161aadc0e1cf960ad0ec63481467bb06b279f48656ae79ae0f9977a3fb9b9] <==
	2025/10/19 17:36:58 Using namespace: kubernetes-dashboard
	2025/10/19 17:36:58 Using in-cluster config to connect to apiserver
	2025/10/19 17:36:58 Using secret token for csrf signing
	2025/10/19 17:36:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:36:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:36:58 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:36:58 Generating JWE encryption key
	2025/10/19 17:36:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:36:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:36:59 Initializing JWE encryption key from synchronized object
	2025/10/19 17:36:59 Creating in-cluster Sidecar client
	2025/10/19 17:36:59 Serving insecurely on HTTP port: 9090
	2025/10/19 17:36:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:37:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:36:58 Starting overwatch
	
	
	==> storage-provisioner [1407f79c02f56a6d1abaf7fcd2e5b44442d48282c70283b2b7f76b4a46ec759d] <==
	I1019 17:36:41.433215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:37:11.435081       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
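
This fatal is the provisioner's startup probe: a discovery call to GET /version against the service VIP (10.96.0.1:443) with the 32-second timeout visible in the URL, which timed out because the restarted node's service routing was not yet programmed. A hedged sketch of that probe via client-go (probeAPIServer is illustrative, not the provisioner's actual code):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // probeAPIServer performs the same GET /version the provisioner logs show,
    // including the 32s client timeout that appears as "?timeout=32s" in the URL.
    func probeAPIServer() error {
    	config, err := rest.InClusterConfig() // resolves to the service VIP, e.g. https://10.96.0.1:443
    	if err != nil {
    		return err
    	}
    	config.Timeout = 32 * time.Second
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		return err
    	}
    	v, err := clientset.Discovery().ServerVersion()
    	if err != nil {
    		return fmt.Errorf("error getting server version: %w", err)
    	}
    	fmt.Println("connected, server version:", v.GitVersion)
    	return nil
    }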
	
	
	==> storage-provisioner [7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520] <==
	I1019 17:37:11.556090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 17:37:11.586722       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:37:11.586796       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:37:11.589288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:15.053554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:19.344962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:22.943153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:25.996357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:29.019745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:29.033800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:37:29.034141       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:37:29.034754       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-370596_66415c7f-c40b-446c-b0f1-0cc2b5475634!
	I1019 17:37:29.034227       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1c4cfdf-cdef-4239-ba06-3720ec0343a4", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-370596_66415c7f-c40b-446c-b0f1-0cc2b5475634 became leader
	W1019 17:37:29.046657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:29.060151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:37:29.135709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-370596_66415c7f-c40b-446c-b0f1-0cc2b5475634!
	W1019 17:37:31.063582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:31.073506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:33.077436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:33.084035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:35.086977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:35.092005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:37.097889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:37.119356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
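
The leaderelection.go lines above show the provisioner acquiring the kube-system/k8s.io-minikube-hostpath lock; the surrounding Endpoints-deprecation warnings indicate it still uses an Endpoints-based resource lock. A sketch of the same flow with the current Lease-based lock (runWithLease and its durations are illustrative, not the provisioner's code):

    package main

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    // runWithLease blocks trying to acquire the named Lease, runs `run` while it
    // holds leadership, and returns once leadership is lost or ctx ends. The
    // durations below are common defaults, not the provisioner's settings.
    func runWithLease(ctx context.Context, client kubernetes.Interface, id string, run func(context.Context)) {
    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    	}
    	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second,
    		RenewDeadline: 10 * time.Second,
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: run,
    			OnStoppedLeading: func() { /* leadership lost; caller decides whether to exit */ },
    		},
    	})
    }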
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596: exit status 2 (588.572919ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-370596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-370596
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-370596:

-- stdout --
	[
	    {
	        "Id": "fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47",
	        "Created": "2025-10-19T17:34:41.755702895Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245546,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:36:23.345682306Z",
	            "FinishedAt": "2025-10-19T17:36:22.471881965Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/hosts",
	        "LogPath": "/var/lib/docker/containers/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47-json.log",
	        "Name": "/default-k8s-diff-port-370596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-370596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-370596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47",
	                "LowerDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/43ca4c04b73782b5e6d7f2052f3e36dafb2dd30bd6801027186155e4465cedcd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-370596",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-370596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-370596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-370596",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-370596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3112ffc5aaf2727c74f5f2a1d944a1aac02abc076e428800bcb16573c07878b5",
	            "SandboxKey": "/var/run/docker/netns/3112ffc5aaf2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-370596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:53:4d:17:8f:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ae64488c7e77a883b5d278e8675d09c05353cf5ff587cc6ffef79a9a972f538",
	                    "EndpointID": "01aa326daa410857d85d7442e9898287ce6da1f50ca62f7d35cf59e32c7d1637",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-370596",
	                        "fe1a19329d9f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
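
The inspect dump above carries the dynamically assigned host ports under NetworkSettings.Ports (e.g. 22/tcp -> 33118). Rather than parsing the whole JSON, a single field can be pulled with a format template, which is the same approach the cli_runner lines in the Last Start log below use. A minimal sketch, assuming the docker CLI is on PATH and the profile container from this run still exists; hostPort is a hypothetical helper, not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort extracts one published port from `docker container inspect`
// using a Go template, mirroring the template minikube runs in the log below.
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("default-k8s-diff-port-370596", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + p) // 33118 in the dump above
}
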
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
E1019 17:37:38.788840    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596: exit status 2 (557.743584ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
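
As the helper notes, a non-zero exit from `minikube status` does not necessarily mean the host is down: here stdout reports "Running" while the command exits 2 because other components are degraded (the cluster was just paused). A sketch of how a harness can keep both the output and the exit code, assuming the test binary layout used in this run; hostState is a hypothetical helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostState runs `minikube status --format={{.Host}}` and returns stdout
// together with the process exit code instead of discarding it on error.
func hostState(profile string) (string, int) {
	cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.Output() // stdout is still returned alongside an ExitError
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	state, code := hostState("default-k8s-diff-port-370596")
	fmt.Printf("host=%s exit=%d\n", state, code) // host=Running exit=2 in this run
}
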
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-370596 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-370596 logs -n 25: (1.951506286s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-038781 image list --format=json                                                                                                                                                                                                    │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ pause   │ -p no-preload-038781 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │                     │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p no-preload-038781                                                                                                                                                                                                                          │ no-preload-038781            │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                                                                                               │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-370596 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ embed-certs-296314 image list --format=json                                                                                                                                                                                                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ pause   │ -p embed-certs-296314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ stop    │ -p newest-cni-633463 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-633463 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ image   │ default-k8s-diff-port-370596 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ pause   │ -p default-k8s-diff-port-370596 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:37:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:37:27.032239  252004 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:37:27.032438  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032469  252004 out.go:374] Setting ErrFile to fd 2...
	I1019 17:37:27.032495  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032763  252004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:37:27.033178  252004 out.go:368] Setting JSON to false
	I1019 17:37:27.034113  252004 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4795,"bootTime":1760890652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:37:27.034212  252004 start.go:143] virtualization:  
	I1019 17:37:27.039794  252004 out.go:179] * [newest-cni-633463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:37:27.043053  252004 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:37:27.043135  252004 notify.go:221] Checking for updates...
	I1019 17:37:27.049102  252004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:37:27.051961  252004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:27.054936  252004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:37:27.057816  252004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:37:27.060704  252004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:37:27.063995  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:27.064614  252004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:37:27.096144  252004 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:37:27.096298  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.151308  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.14172198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.151422  252004 docker.go:319] overlay module found
	I1019 17:37:27.154606  252004 out.go:179] * Using the docker driver based on existing profile
	I1019 17:37:27.157314  252004 start.go:309] selected driver: docker
	I1019 17:37:27.157331  252004 start.go:930] validating driver "docker" against &{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.157428  252004 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:37:27.158143  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.221005  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.211547247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.221371  252004 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:37:27.221408  252004 cni.go:84] Creating CNI manager for ""
	I1019 17:37:27.221460  252004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:27.221500  252004 start.go:353] cluster config:
	{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.224716  252004 out.go:179] * Starting "newest-cni-633463" primary control-plane node in "newest-cni-633463" cluster
	I1019 17:37:27.227526  252004 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:37:27.230600  252004 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:37:27.233300  252004 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:37:27.233356  252004 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:37:27.233386  252004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:37:27.233392  252004 cache.go:59] Caching tarball of preloaded images
	I1019 17:37:27.233571  252004 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:37:27.233580  252004 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:37:27.233695  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.253189  252004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:37:27.253215  252004 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:37:27.253229  252004 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:37:27.253253  252004 start.go:360] acquireMachinesLock for newest-cni-633463: {Name:mk5bb6cb5b9b89fc5f7e65da679c1a55c56b4fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:37:27.253329  252004 start.go:364] duration metric: took 36.292µs to acquireMachinesLock for "newest-cni-633463"
	I1019 17:37:27.253353  252004 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:37:27.253363  252004 fix.go:54] fixHost starting: 
	I1019 17:37:27.253610  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.270778  252004 fix.go:112] recreateIfNeeded on newest-cni-633463: state=Stopped err=<nil>
	W1019 17:37:27.270810  252004 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:37:27.274260  252004 out.go:252] * Restarting existing docker container for "newest-cni-633463" ...
	I1019 17:37:27.274384  252004 cli_runner.go:164] Run: docker start newest-cni-633463
	I1019 17:37:27.550197  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.572138  252004 kic.go:430] container "newest-cni-633463" state is running.
	I1019 17:37:27.572522  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:27.598627  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.598852  252004 machine.go:94] provisionDockerMachine start ...
	I1019 17:37:27.598917  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:27.621212  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:27.621530  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:27.621539  252004 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:37:27.622140  252004 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35560->127.0.0.1:33128: read: connection reset by peer
	I1019 17:37:30.774430  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.774458  252004 ubuntu.go:182] provisioning hostname "newest-cni-633463"
	I1019 17:37:30.774529  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.793360  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.793655  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.793671  252004 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-633463 && echo "newest-cni-633463" | sudo tee /etc/hostname
	I1019 17:37:30.956546  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.956622  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.978519  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.978856  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.978879  252004 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-633463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-633463/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-633463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:37:31.143453  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:37:31.143482  252004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:37:31.143503  252004 ubuntu.go:190] setting up certificates
	I1019 17:37:31.143530  252004 provision.go:84] configureAuth start
	I1019 17:37:31.143603  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:31.162905  252004 provision.go:143] copyHostCerts
	I1019 17:37:31.162977  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:37:31.163001  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:37:31.163081  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:37:31.163199  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:37:31.163210  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:37:31.163237  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:37:31.163303  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:37:31.163313  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:37:31.163341  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:37:31.163402  252004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-633463 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-633463]
	I1019 17:37:32.238364  252004 provision.go:177] copyRemoteCerts
	I1019 17:37:32.238433  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:37:32.238477  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.259782  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.363454  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:37:32.382228  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:37:32.402346  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:37:32.434653  252004 provision.go:87] duration metric: took 1.291102282s to configureAuth
	I1019 17:37:32.434677  252004 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:37:32.434877  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:32.434994  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.460163  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:32.460471  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:32.460484  252004 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:37:32.812582  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:37:32.812601  252004 machine.go:97] duration metric: took 5.213739158s to provisionDockerMachine
	I1019 17:37:32.812612  252004 start.go:293] postStartSetup for "newest-cni-633463" (driver="docker")
	I1019 17:37:32.812623  252004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:37:32.812687  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:37:32.812731  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.845647  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.958253  252004 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:37:32.962641  252004 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:37:32.962669  252004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:37:32.962681  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:37:32.962741  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:37:32.962825  252004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:37:32.962929  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:37:32.982498  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:37:33.019033  252004 start.go:296] duration metric: took 206.405729ms for postStartSetup
	I1019 17:37:33.019119  252004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:37:33.019182  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.060276  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.167952  252004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:37:33.176969  252004 fix.go:56] duration metric: took 5.923599942s for fixHost
	I1019 17:37:33.176995  252004 start.go:83] releasing machines lock for "newest-cni-633463", held for 5.923653801s
	I1019 17:37:33.177082  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:33.203375  252004 ssh_runner.go:195] Run: cat /version.json
	I1019 17:37:33.203411  252004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:37:33.203489  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.203426  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.248837  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.249412  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.469299  252004 ssh_runner.go:195] Run: systemctl --version
	I1019 17:37:33.477000  252004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:37:33.515118  252004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:37:33.520482  252004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:37:33.520556  252004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:37:33.529508  252004 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:37:33.529534  252004 start.go:496] detecting cgroup driver to use...
	I1019 17:37:33.529596  252004 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:37:33.529659  252004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:37:33.550450  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:37:33.567224  252004 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:37:33.567330  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:37:33.587367  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:37:33.603412  252004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:37:33.721296  252004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:37:33.835246  252004 docker.go:234] disabling docker service ...
	I1019 17:37:33.835350  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:37:33.850207  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:37:33.864410  252004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:37:33.985866  252004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:37:34.153123  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:37:34.167027  252004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:37:34.183139  252004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:37:34.183251  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.193611  252004 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:37:34.193726  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.203528  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.215302  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.225045  252004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:37:34.233944  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.244342  252004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.252329  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.263222  252004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:37:34.273315  252004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:37:34.281853  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:34.398185  252004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:37:34.534169  252004 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:37:34.534291  252004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:37:34.538246  252004 start.go:564] Will wait 60s for crictl version
	I1019 17:37:34.538363  252004 ssh_runner.go:195] Run: which crictl
	I1019 17:37:34.542091  252004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:37:34.567928  252004 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:37:34.568078  252004 ssh_runner.go:195] Run: crio --version
	I1019 17:37:34.597233  252004 ssh_runner.go:195] Run: crio --version
	I1019 17:37:34.633625  252004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:37:34.636469  252004 cli_runner.go:164] Run: docker network inspect newest-cni-633463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:37:34.651969  252004 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:37:34.656388  252004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:37:34.669690  252004 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 17:37:34.672465  252004 kubeadm.go:884] updating cluster {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:37:34.672612  252004 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:37:34.672681  252004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:37:34.749018  252004 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:37:34.749089  252004 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:37:34.749162  252004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:37:34.794191  252004 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:37:34.794264  252004 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:37:34.794288  252004 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:37:34.794413  252004 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-633463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:37:34.794518  252004 ssh_runner.go:195] Run: crio config
	I1019 17:37:34.860893  252004 cni.go:84] Creating CNI manager for ""
	I1019 17:37:34.860958  252004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:34.860994  252004 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 17:37:34.861034  252004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-633463 NodeName:newest-cni-633463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:37:34.861183  252004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-633463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
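A generated config like the one above can be sanity-checked before kubeadm consumes it. A minimal sketch, assuming the YAML was written to /var/tmp/minikube/kubeadm.yaml.new (the path scp'd below) and the v1.34.x kubeadm binary is on the PATH:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new   # schema and version check only
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run    # full preflight without mutating the node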
	I1019 17:37:34.861285  252004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:37:34.873668  252004 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:37:34.873773  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:37:34.884115  252004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:37:34.899167  252004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:37:34.913418  252004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1019 17:37:34.928410  252004 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:37:34.932497  252004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:37:34.942938  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:35.118309  252004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:37:35.151281  252004 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463 for IP: 192.168.85.2
	I1019 17:37:35.151298  252004 certs.go:195] generating shared ca certs ...
	I1019 17:37:35.151314  252004 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:35.151434  252004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:37:35.151469  252004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:37:35.151476  252004 certs.go:257] generating profile certs ...
	I1019 17:37:35.151552  252004 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.key
	I1019 17:37:35.151601  252004 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key.1ea41287
	I1019 17:37:35.151636  252004 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key
	I1019 17:37:35.151753  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:37:35.151783  252004 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:37:35.151792  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:37:35.151815  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:37:35.151839  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:37:35.151860  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:37:35.151900  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:37:35.152504  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:37:35.200597  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:37:35.230333  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:37:35.258952  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:37:35.280905  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:37:35.335355  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:37:35.396523  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:37:35.433499  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:37:35.470268  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:37:35.489518  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:37:35.528313  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:37:35.559597  252004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:37:35.576016  252004 ssh_runner.go:195] Run: openssl version
	I1019 17:37:35.583318  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:37:35.593494  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.600654  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.600720  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.659469  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:37:35.671808  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:37:35.681887  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.686331  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.686396  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.744102  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:37:35.753718  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:37:35.763391  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.767365  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.767433  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.817437  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
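The 3ec20f2e.0, b5213941.0, and 51391683.0 link names above follow OpenSSL's c_rehash convention: trust-store lookups locate a CA file through a symlink named after its subject-name hash. A minimal sketch of the same pairing for one cert, reusing a PEM path from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # ${h} is b5213941 here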
	I1019 17:37:35.825631  252004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:37:35.829562  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:37:35.895384  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:37:35.974594  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:37:36.095726  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:37:36.221650  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:37:36.343301  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
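Each -checkend 86400 run above exits 0 only if the certificate remains valid for at least another 86400 seconds (24 hours), which is how minikube decides whether control-plane certs need regenerating. A minimal sketch of the same check, reusing one of the cert paths from the log:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "valid for at least another 24h"
	else
	    echo "expires within 24h (or unreadable): regenerate"
	fi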
	I1019 17:37:36.432978  252004 kubeadm.go:401] StartCluster: {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:36.433073  252004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:37:36.433140  252004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:37:36.498472  252004 cri.go:89] found id: "42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840"
	I1019 17:37:36.498496  252004 cri.go:89] found id: "8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87"
	I1019 17:37:36.498502  252004 cri.go:89] found id: "9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760"
	I1019 17:37:36.498506  252004 cri.go:89] found id: "1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af"
	I1019 17:37:36.498509  252004 cri.go:89] found id: ""
	I1019 17:37:36.498581  252004 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:37:36.519309  252004 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:36Z" level=error msg="open /run/runc: no such file or directory"
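The runc failure above is benign here: runc keeps per-container state under its root directory (/run/runc by default), and that directory simply does not exist when nothing has been paused, so minikube concludes there is nothing to unpause. A minimal sketch of the same probe, assuming runc is installed on the node:

	sudo runc --root /run/runc list -f json 2>/dev/null || echo "no runc state dir: nothing paused"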
	I1019 17:37:36.519401  252004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:37:36.535125  252004 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:37:36.535154  252004 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:37:36.535202  252004 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:37:36.549734  252004 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:37:36.550325  252004 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-633463" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:36.550710  252004 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-633463" cluster setting kubeconfig missing "newest-cni-633463" context setting]
	I1019 17:37:36.551159  252004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.552771  252004 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:37:36.569733  252004 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:37:36.569768  252004 kubeadm.go:602] duration metric: took 34.607778ms to restartPrimaryControlPlane
	I1019 17:37:36.569777  252004 kubeadm.go:403] duration metric: took 136.80791ms to StartCluster
	I1019 17:37:36.569791  252004 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.569851  252004 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:36.570800  252004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.571001  252004 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:37:36.571349  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:36.571375  252004 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:37:36.571528  252004 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-633463"
	I1019 17:37:36.571540  252004 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-633463"
	W1019 17:37:36.571553  252004 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:37:36.571572  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.572383  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.572532  252004 addons.go:70] Setting dashboard=true in profile "newest-cni-633463"
	I1019 17:37:36.572549  252004 addons.go:239] Setting addon dashboard=true in "newest-cni-633463"
	W1019 17:37:36.572556  252004 addons.go:248] addon dashboard should already be in state true
	I1019 17:37:36.572583  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.572955  252004 addons.go:70] Setting default-storageclass=true in profile "newest-cni-633463"
	I1019 17:37:36.572972  252004 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-633463"
	I1019 17:37:36.573200  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.573322  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.576375  252004 out.go:179] * Verifying Kubernetes components...
	I1019 17:37:36.579575  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:36.622594  252004 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:37:36.627897  252004 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:37:36.629343  252004 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:36.629367  252004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:37:36.629433  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.630688  252004 addons.go:239] Setting addon default-storageclass=true in "newest-cni-633463"
	W1019 17:37:36.630710  252004 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:37:36.630735  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.631149  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.640030  252004 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:37:36.644156  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:37:36.644188  252004 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:37:36.644280  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.676441  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.689281  252004 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:37:36.689318  252004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:37:36.689378  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.711052  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.743001  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.939083  252004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:37.024609  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:37:37.024646  252004 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	
	
	==> CRI-O <==
	Oct 19 17:37:08 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:08.467175323Z" level=info msg="Removed container b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq/dashboard-metrics-scraper" id=9d1f15fb-2e0c-4014-84fa-9f62dbc320e4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 conmon[1131]: conmon 1407f79c02f56a6d1aba <ninfo>: container 1139 exited with status 1
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.461295049Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4e093d4b-8ea1-4bb9-89ce-8357a323b049 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.463036504Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=15b98b95-2227-45d0-92ff-60f294a34032 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.464323767Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bd548da3-203a-4337-918f-c2140e91c9a4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.464734718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.483907185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.48433535Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/02d7627bbdf3fb277d68c93fc669bde481135e2c757796c182accce2f702df0d/merged/etc/passwd: no such file or directory"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.484472411Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/02d7627bbdf3fb277d68c93fc669bde481135e2c757796c182accce2f702df0d/merged/etc/group: no such file or directory"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.484918899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.513516059Z" level=info msg="Created container 7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520: kube-system/storage-provisioner/storage-provisioner" id=bd548da3-203a-4337-918f-c2140e91c9a4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.514614912Z" level=info msg="Starting container: 7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520" id=b4ea7224-f3d0-45af-902c-57f33c094031 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:11 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:11.519287706Z" level=info msg="Started container" PID=1640 containerID=7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520 description=kube-system/storage-provisioner/storage-provisioner id=b4ea7224-f3d0-45af-902c-57f33c094031 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5b1db43eeb577b64114b3cfdb46fefcb49ffd1faa35bd6a2a9060f01056dbcb
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.15283098Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.156581004Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.156615925Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.156637554Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.167159615Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.167197991Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.167222426Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.182952581Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.182985558Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.183013029Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.186972835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 17:37:21 default-k8s-diff-port-370596 crio[649]: time="2025-10-19T17:37:21.187007314Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7885d58b89b98       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   b5b1db43eeb57       storage-provisioner                                    kube-system
	7967fdc5cbdb0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   fbfbdfd588c18       dashboard-metrics-scraper-6ffb444bf9-wt7tq             kubernetes-dashboard
	d7e161aadc0e1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   cfc4e941983b5       kubernetes-dashboard-855c9754f9-vv2r4                  kubernetes-dashboard
	36abca155d76e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   95c7914169158       busybox                                                default
	30db141fa264b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   bcc23341146ea       coredns-66bc5c9577-vjhwx                               kube-system
	f619f61aa2774       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   b2303bdd4ecf8       kube-proxy-24xql                                       kube-system
	1407f79c02f56       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   b5b1db43eeb57       storage-provisioner                                    kube-system
	d063568e64248       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   b553c2fc40f43       kindnet-6xvl9                                          kube-system
	5cf150c07bffb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fb8085a254f50       kube-controller-manager-default-k8s-diff-port-370596   kube-system
	aca1c44b76285       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   0c6394c82ebbd       kube-apiserver-default-k8s-diff-port-370596            kube-system
	d4509ad64c1eb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   442e7e761a195       etcd-default-k8s-diff-port-370596                      kube-system
	195750df18b09       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c46d1cc4b9017       kube-scheduler-default-k8s-diff-port-370596            kube-system
	
	
	==> coredns [30db141fa264b9a802684de3150779c5736b374899eb2f97d8dba30adc88c7d3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54736 - 1776 "HINFO IN 5326392725884523671.8254381856711397482. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025910573s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
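The dial tcp 10.96.0.1:443: i/o timeout errors above mean CoreDNS could not reach the apiserver through the kubernetes Service ClusterIP for a window after the restart (kindnet reports the same timeouts below before its caches sync). A minimal in-cluster reachability probe, assuming kubectl access to this profile, anonymous auth left at its default, and that the curlimages/curl image is pullable:

	kubectl run api-check --rm -it --restart=Never --image=curlimages/curl --command -- \
	    curl -sk https://10.96.0.1:443/version   # /version is readable anonymously via system:public-info-viewer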
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-370596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-370596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=default-k8s-diff-port-370596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_35_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:35:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-370596
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:37:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:37:10 +0000   Sun, 19 Oct 2025 17:35:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-370596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e51b66e9-2b10-4f4c-b9ea-b7f9cb5ec8fe
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-vjhwx                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-default-k8s-diff-port-370596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-6xvl9                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-370596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-370596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-24xql                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-370596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wt7tq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vv2r4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m24s              kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m32s              kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m32s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s              kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s              kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m32s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m27s              node-controller  Node default-k8s-diff-port-370596 event: Registered Node default-k8s-diff-port-370596 in Controller
	  Normal   NodeReady                105s               kubelet          Node default-k8s-diff-port-370596 status is now: NodeReady
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-370596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-370596 event: Registered Node default-k8s-diff-port-370596 in Controller
	
	
	==> dmesg <==
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	[Oct19 17:36] overlayfs: idmapped layers are currently not supported
	[Oct19 17:37] overlayfs: idmapped layers are currently not supported
	[ +27.886872] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d4509ad64c1eb11af3d453484caa9c46a9674da90e577b46cf1ad436550a9bfe] <==
	{"level":"warn","ts":"2025-10-19T17:36:37.701851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.729827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.756507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.783080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.812046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.860031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.895638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.964272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:37.974343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.004848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.032154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.050007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.077941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.095065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.126453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.150412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.169694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.198794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.224215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.242632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.311209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.346184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.364727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.395941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:36:38.443822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59964","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:37:40 up  1:20,  0 user,  load average: 5.46, 4.28, 3.66
	Linux default-k8s-diff-port-370596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d063568e642486afd257c23bc8b0d1fed9f45edb969d3248797e8792e9999f52] <==
	I1019 17:36:40.754690       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:36:40.755899       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 17:36:40.756044       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:36:40.756057       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:36:40.756068       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:36:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:36:41.170831       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:36:41.170862       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:36:41.170885       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:36:41.171815       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 17:37:11.171910       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 17:37:11.172063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 17:37:11.172097       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 17:37:11.172176       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1019 17:37:12.671198       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 17:37:12.671296       1 metrics.go:72] Registering metrics
	I1019 17:37:12.671456       1 controller.go:711] "Syncing nftables rules"
	I1019 17:37:21.152203       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:37:21.152252       1 main.go:301] handling current node
	I1019 17:37:31.158626       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 17:37:31.158662       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aca1c44b76285c09db2393734432a8efea9ed5daf6067f6faf51a17b63af121b] <==
	I1019 17:36:39.524291       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 17:36:39.539061       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:36:39.539162       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:36:39.549440       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:36:39.569182       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:36:39.707142       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:36:39.707171       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:36:39.707181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:36:39.707188       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:36:39.725696       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 17:36:39.725757       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 17:36:39.739608       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:36:39.772178       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1019 17:36:39.828447       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:36:40.094240       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:36:40.246004       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:36:41.587605       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:36:41.811917       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:36:41.887865       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:36:41.943587       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:36:42.110943       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.5.4"}
	I1019 17:36:42.170869       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.54.23"}
	I1019 17:36:44.029691       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:36:44.084105       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:36:44.329493       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5cf150c07bffb7c7dc4c126c49627f73d20284751e58cc8c02bde67d1ed68c3c] <==
	I1019 17:36:43.900999       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:36:43.906682       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:36:43.906766       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:36:43.910005       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:36:43.913292       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:36:43.916581       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:36:43.917020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:36:43.917079       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:36:43.917121       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:36:43.920828       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:36:43.922307       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:36:43.924379       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:36:43.924485       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:36:43.925591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:36:43.927973       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:36:43.928056       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:36:43.928136       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-370596"
	I1019 17:36:43.928215       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 17:36:43.934645       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 17:36:43.934734       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:36:43.934771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:36:43.935989       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:36:43.951971       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:36:43.972949       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:36:43.974276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [f619f61aa27749c75021a3b43e6cb29266fc888091e65a14f87ce98a9c5c2415] <==
	I1019 17:36:41.280997       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:36:41.730235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:36:41.845557       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:36:41.845677       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 17:36:41.845792       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:36:42.011390       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:36:42.011543       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:36:42.104851       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:36:42.105683       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:36:42.105770       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:36:42.119484       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:36:42.119850       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:36:42.120381       1 config.go:200] "Starting service config controller"
	I1019 17:36:42.120568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:36:42.121002       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:36:42.126622       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:36:42.158862       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:36:42.125344       1 config.go:309] "Starting node config controller"
	I1019 17:36:42.167580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:36:42.167642       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:36:42.224389       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:36:42.224544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [195750df18b095565b5aa6d68d380e0477dcd39d96118413146e6f3cc1d5a7bd] <==
	I1019 17:36:33.746690       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:36:39.322149       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:36:39.322190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:36:39.322202       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:36:39.322209       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:36:39.536139       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:36:39.536174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:36:39.546993       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:36:39.547146       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:36:39.547165       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:36:39.547182       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:36:39.647908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:44.564792     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzq2p\" (UniqueName: \"kubernetes.io/projected/0ad03331-716a-44bb-b0f4-2bb2271a8d3a-kube-api-access-kzq2p\") pod \"dashboard-metrics-scraper-6ffb444bf9-wt7tq\" (UID: \"0ad03331-716a-44bb-b0f4-2bb2271a8d3a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq"
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:44.665869     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsw5b\" (UniqueName: \"kubernetes.io/projected/1535a391-32cd-430f-911d-6f819ec0e20c-kube-api-access-xsw5b\") pod \"kubernetes-dashboard-855c9754f9-vv2r4\" (UID: \"1535a391-32cd-430f-911d-6f819ec0e20c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vv2r4"
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:44.665937     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1535a391-32cd-430f-911d-6f819ec0e20c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-vv2r4\" (UID: \"1535a391-32cd-430f-911d-6f819ec0e20c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vv2r4"
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: W1019 17:36:44.842396     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/crio-fbfbdfd588c183eb392d6dd24bbde9759235a98c2ef5fa30d91d0c9f09eee3e1 WatchSource:0}: Error finding container fbfbdfd588c183eb392d6dd24bbde9759235a98c2ef5fa30d91d0c9f09eee3e1: Status 404 returned error can't find the container with id fbfbdfd588c183eb392d6dd24bbde9759235a98c2ef5fa30d91d0c9f09eee3e1
	Oct 19 17:36:44 default-k8s-diff-port-370596 kubelet[777]: W1019 17:36:44.881827     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fe1a19329d9f051682244482232a7379fb6246fed3910ec8da0efc085c333a47/crio-cfc4e941983b54a4be375d4a3bf9d734a955f70bd2000f930e645730cd2fb192 WatchSource:0}: Error finding container cfc4e941983b54a4be375d4a3bf9d734a955f70bd2000f930e645730cd2fb192: Status 404 returned error can't find the container with id cfc4e941983b54a4be375d4a3bf9d734a955f70bd2000f930e645730cd2fb192
	Oct 19 17:36:49 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:49.247094     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:36:52 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:52.382919     777 scope.go:117] "RemoveContainer" containerID="10248de2281e0152636e6b2249cef2050714210ad68737634273fa37c112eb33"
	Oct 19 17:36:53 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:53.387332     777 scope.go:117] "RemoveContainer" containerID="10248de2281e0152636e6b2249cef2050714210ad68737634273fa37c112eb33"
	Oct 19 17:36:53 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:53.387612     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:36:53 default-k8s-diff-port-370596 kubelet[777]: E1019 17:36:53.387756     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:36:54 default-k8s-diff-port-370596 kubelet[777]: I1019 17:36:54.797249     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:36:54 default-k8s-diff-port-370596 kubelet[777]: E1019 17:36:54.797427     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.023548     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.445642     777 scope.go:117] "RemoveContainer" containerID="b223672675df5db7531c6c8ead7538640959558536b923c5847c070a3d0cb10a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.445924     777 scope.go:117] "RemoveContainer" containerID="7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: E1019 17:37:08.446071     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:08 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:08.477575     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vv2r4" podStartSLOduration=10.949970789 podStartE2EDuration="24.477557953s" podCreationTimestamp="2025-10-19 17:36:44 +0000 UTC" firstStartedPulling="2025-10-19 17:36:44.900073113 +0000 UTC m=+14.149118374" lastFinishedPulling="2025-10-19 17:36:58.427660277 +0000 UTC m=+27.676705538" observedRunningTime="2025-10-19 17:36:59.437941618 +0000 UTC m=+28.686986895" watchObservedRunningTime="2025-10-19 17:37:08.477557953 +0000 UTC m=+37.726603222"
	Oct 19 17:37:11 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:11.460320     777 scope.go:117] "RemoveContainer" containerID="1407f79c02f56a6d1abaf7fcd2e5b44442d48282c70283b2b7f76b4a46ec759d"
	Oct 19 17:37:14 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:14.797057     777 scope.go:117] "RemoveContainer" containerID="7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	Oct 19 17:37:14 default-k8s-diff-port-370596 kubelet[777]: E1019 17:37:14.798083     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:28 default-k8s-diff-port-370596 kubelet[777]: I1019 17:37:28.023435     777 scope.go:117] "RemoveContainer" containerID="7967fdc5cbdb0732a243f1cd73c6656a1407f9fd485d38c6c22b6837a9274c70"
	Oct 19 17:37:28 default-k8s-diff-port-370596 kubelet[777]: E1019 17:37:28.023626     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wt7tq_kubernetes-dashboard(0ad03331-716a-44bb-b0f4-2bb2271a8d3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wt7tq" podUID="0ad03331-716a-44bb-b0f4-2bb2271a8d3a"
	Oct 19 17:37:33 default-k8s-diff-port-370596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:37:33 default-k8s-diff-port-370596 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:37:33 default-k8s-diff-port-370596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d7e161aadc0e1cf960ad0ec63481467bb06b279f48656ae79ae0f9977a3fb9b9] <==
	2025/10/19 17:36:58 Using namespace: kubernetes-dashboard
	2025/10/19 17:36:58 Using in-cluster config to connect to apiserver
	2025/10/19 17:36:58 Using secret token for csrf signing
	2025/10/19 17:36:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 17:36:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 17:36:58 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 17:36:58 Generating JWE encryption key
	2025/10/19 17:36:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 17:36:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 17:36:59 Initializing JWE encryption key from synchronized object
	2025/10/19 17:36:59 Creating in-cluster Sidecar client
	2025/10/19 17:36:59 Serving insecurely on HTTP port: 9090
	2025/10/19 17:36:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:37:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 17:36:58 Starting overwatch
	
	
	==> storage-provisioner [1407f79c02f56a6d1abaf7fcd2e5b44442d48282c70283b2b7f76b4a46ec759d] <==
	I1019 17:36:41.433215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 17:37:11.435081       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7885d58b89b98413fa7ab4ff2a01f891ab049082b184803ca7c65a6d8e19e520] <==
	I1019 17:37:11.586722       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 17:37:11.586796       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 17:37:11.589288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:15.053554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:19.344962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:22.943153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:25.996357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:29.019745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:29.033800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:37:29.034141       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 17:37:29.034754       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-370596_66415c7f-c40b-446c-b0f1-0cc2b5475634!
	I1019 17:37:29.034227       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1c4cfdf-cdef-4239-ba06-3720ec0343a4", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-370596_66415c7f-c40b-446c-b0f1-0cc2b5475634 became leader
	W1019 17:37:29.046657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:29.060151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 17:37:29.135709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-370596_66415c7f-c40b-446c-b0f1-0cc2b5475634!
	W1019 17:37:31.063582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:31.073506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:33.077436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:33.084035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:35.086977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:35.092005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:37.097889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:37.119356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:39.126861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 17:37:39.138886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
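The first storage-provisioner instance above died with an i/o timeout against the in-cluster apiserver address (10.96.0.1:443) during the restart window, and its replacement only recovered after re-acquiring the leader lease. A quick way to confirm the service VIP is reachable from inside the node is a direct probe; this is a hedged sketch (curl is assumed to be present in the kicbase image, and the VIP is taken from the log above):

	# Probe the kubernetes Service VIP from inside the node container (sketch, not from the harness):
	docker exec default-k8s-diff-port-370596 curl -sk --max-time 5 https://10.96.0.1:443/version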
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596: exit status 2 (587.513574ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-370596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.70s)
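Note the contradiction the harness records for this failure: pause exited non-zero, yet the host status query still prints "Running". Cross-checking both layers manually can disambiguate whether the node container or only the guest runtime is unpaused; this is a sketch using standard docker and crictl flags (the profile name comes from the logs above; nothing here is part of the test harness):

	# Docker layer: is the node container itself paused?
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-370596
	# Guest CRI layer: which containers are still running inside the node?
	docker exec default-k8s-diff-port-370596 sudo crictl ps --state running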

TestStartStop/group/newest-cni/serial/Pause (5.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-633463 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-633463 --alsologtostderr -v=1: exit status 80 (1.823113658s)

-- stdout --
	* Pausing node newest-cni-633463 ... 
	
	

-- /stdout --
** stderr ** 
	I1019 17:37:46.096593  255133 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:37:46.096736  255133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:46.096741  255133 out.go:374] Setting ErrFile to fd 2...
	I1019 17:37:46.096745  255133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:46.097083  255133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:37:46.097410  255133 out.go:368] Setting JSON to false
	I1019 17:37:46.097434  255133 mustload.go:66] Loading cluster: newest-cni-633463
	I1019 17:37:46.102299  255133 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:46.102954  255133 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:46.137357  255133 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:46.137781  255133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:46.259625  255133 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:46.248015359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:46.260400  255133 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-633463 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 17:37:46.264347  255133 out.go:179] * Pausing node newest-cni-633463 ... 
	I1019 17:37:46.267496  255133 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:46.267890  255133 ssh_runner.go:195] Run: systemctl --version
	I1019 17:37:46.267949  255133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:46.299271  255133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:46.405675  255133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:37:46.418782  255133 pause.go:52] kubelet running: true
	I1019 17:37:46.418847  255133 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:37:46.650360  255133 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:37:46.650451  255133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:37:46.742808  255133 cri.go:89] found id: "4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200"
	I1019 17:37:46.742885  255133 cri.go:89] found id: "edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3"
	I1019 17:37:46.742897  255133 cri.go:89] found id: "42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840"
	I1019 17:37:46.742902  255133 cri.go:89] found id: "8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87"
	I1019 17:37:46.742906  255133 cri.go:89] found id: "9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760"
	I1019 17:37:46.742910  255133 cri.go:89] found id: "1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af"
	I1019 17:37:46.742922  255133 cri.go:89] found id: ""
	I1019 17:37:46.742978  255133 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:37:46.754377  255133 retry.go:31] will retry after 280.61706ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:46Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:37:47.035842  255133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:37:47.048494  255133 pause.go:52] kubelet running: false
	I1019 17:37:47.048612  255133 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:37:47.191976  255133 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:37:47.192096  255133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:37:47.259772  255133 cri.go:89] found id: "4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200"
	I1019 17:37:47.259795  255133 cri.go:89] found id: "edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3"
	I1019 17:37:47.259801  255133 cri.go:89] found id: "42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840"
	I1019 17:37:47.259805  255133 cri.go:89] found id: "8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87"
	I1019 17:37:47.259808  255133 cri.go:89] found id: "9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760"
	I1019 17:37:47.259826  255133 cri.go:89] found id: "1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af"
	I1019 17:37:47.259831  255133 cri.go:89] found id: ""
	I1019 17:37:47.259918  255133 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:37:47.271225  255133 retry.go:31] will retry after 295.681162ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:47Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:37:47.567921  255133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:37:47.583603  255133 pause.go:52] kubelet running: false
	I1019 17:37:47.583723  255133 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 17:37:47.722947  255133 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 17:37:47.723020  255133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 17:37:47.799619  255133 cri.go:89] found id: "4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200"
	I1019 17:37:47.799646  255133 cri.go:89] found id: "edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3"
	I1019 17:37:47.799657  255133 cri.go:89] found id: "42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840"
	I1019 17:37:47.799661  255133 cri.go:89] found id: "8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87"
	I1019 17:37:47.799664  255133 cri.go:89] found id: "9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760"
	I1019 17:37:47.799668  255133 cri.go:89] found id: "1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af"
	I1019 17:37:47.799672  255133 cri.go:89] found id: ""
	I1019 17:37:47.799748  255133 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 17:37:47.814505  255133 out.go:203] 
	W1019 17:37:47.817355  255133 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 17:37:47.817380  255133 out.go:285] * 
	* 
	W1019 17:37:47.822335  255133 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:37:47.825238  255133 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-633463 --alsologtostderr -v=1 failed: exit status 80
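The stderr above shows the proximate cause: after stopping the kubelet, minikube shells out to `sudo runc list -f json`, which fails because `/run/runc` does not exist in the guest, and after three retries the pause aborts with GUEST_PAUSE. A hedged way to see which state directory the container runtime actually populates (`/run/runc` and `/run/crun` are common runtime defaults, assumed here rather than taken from the log):

	# Which low-level runtime state dir exists in the guest? (sketch; the paths are assumptions)
	docker exec newest-cni-633463 sudo ls -d /run/runc /run/crun 2>/dev/null
	# The CRI-level listing still works, as the crictl calls in the log demonstrate:
	docker exec newest-cni-633463 sudo crictl ps -a --quiet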
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-633463
helpers_test.go:243: (dbg) docker inspect newest-cni-633463:

-- stdout --
	[
	    {
	        "Id": "dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea",
	        "Created": "2025-10-19T17:36:48.723991016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252130,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:37:27.306815158Z",
	            "FinishedAt": "2025-10-19T17:37:26.436245868Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/hosts",
	        "LogPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea-json.log",
	        "Name": "/newest-cni-633463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-633463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-633463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea",
	                "LowerDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-633463",
	                "Source": "/var/lib/docker/volumes/newest-cni-633463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-633463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-633463",
	                "name.minikube.sigs.k8s.io": "newest-cni-633463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de8d79efe2ef4473549e8abe471d29c8da6cbf1d8a8493ff38092cfe6d3b83fd",
	            "SandboxKey": "/var/run/docker/netns/de8d79efe2ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-633463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:07:6b:1d:e9:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903462b71a8c585e1f826b3d07accd39a29c6c1814ddb40704a08f8813291f55",
	                    "EndpointID": "ef934d3ac8f3a53ca6755c491742e9a663f4566eca8bb2dc075b591e250f9e5f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-633463",
	                        "dc48a98a25fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
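The inspect dump confirms the Docker layer is healthy: the container reports "Running": true and "Paused": false, so the pause failure is confined to the guest runtime. When only a couple of fields matter, a Go-template query avoids the full dump (standard docker CLI usage, not part of the harness):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' newest-cni-633463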
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463: exit status 2 (348.063589ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
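The `--format` flag takes a Go template over minikube's status struct, so exit status 2 can flag a degraded component even while the queried field prints "Running". To see the main components side by side in one call, a template along these lines should work (the field names follow the fields the harness itself queries with {{.Host}} and {{.APIServer}}):

	out/minikube-linux-arm64 status -p newest-cni-633463 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'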
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-633463 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-633463 logs -n 25: (1.062444211s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                                                                                               │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-370596 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ embed-certs-296314 image list --format=json                                                                                                                                                                                                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ pause   │ -p embed-certs-296314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ stop    │ -p newest-cni-633463 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-633463 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ default-k8s-diff-port-370596 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ pause   │ -p default-k8s-diff-port-370596 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-370596                                                                                                                                                                                                               │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ delete  │ -p default-k8s-diff-port-370596                                                                                                                                                                                                               │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ newest-cni-633463 image list --format=json                                                                                                                                                                                                    │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ pause   │ -p newest-cni-633463 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:37:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
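The [IWEF] prefix is klog-style severity (Info, Warning, Error, Fatal), so warnings and errors can be pulled out of a capture like this one with a one-liner. A minimal sketch, assuming the log was saved to minikube.log with its leading tabs intact:

	# keep only W/E/F lines such as "W1019 17:37:27.270810 ..."
	grep -E $'^\t?[WEF][0-9]{4} ' minikube.log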
	I1019 17:37:27.032239  252004 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:37:27.032438  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032469  252004 out.go:374] Setting ErrFile to fd 2...
	I1019 17:37:27.032495  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032763  252004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:37:27.033178  252004 out.go:368] Setting JSON to false
	I1019 17:37:27.034113  252004 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4795,"bootTime":1760890652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:37:27.034212  252004 start.go:143] virtualization:  
	I1019 17:37:27.039794  252004 out.go:179] * [newest-cni-633463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:37:27.043053  252004 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:37:27.043135  252004 notify.go:221] Checking for updates...
	I1019 17:37:27.049102  252004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:37:27.051961  252004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:27.054936  252004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:37:27.057816  252004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:37:27.060704  252004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:37:27.063995  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:27.064614  252004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:37:27.096144  252004 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:37:27.096298  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.151308  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.14172198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.151422  252004 docker.go:319] overlay module found
	I1019 17:37:27.154606  252004 out.go:179] * Using the docker driver based on existing profile
	I1019 17:37:27.157314  252004 start.go:309] selected driver: docker
	I1019 17:37:27.157331  252004 start.go:930] validating driver "docker" against &{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.157428  252004 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:37:27.158143  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.221005  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.211547247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.221371  252004 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:37:27.221408  252004 cni.go:84] Creating CNI manager for ""
	I1019 17:37:27.221460  252004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:27.221500  252004 start.go:353] cluster config:
	{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.224716  252004 out.go:179] * Starting "newest-cni-633463" primary control-plane node in "newest-cni-633463" cluster
	I1019 17:37:27.227526  252004 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:37:27.230600  252004 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:37:27.233300  252004 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:37:27.233356  252004 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:37:27.233386  252004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:37:27.233392  252004 cache.go:59] Caching tarball of preloaded images
	I1019 17:37:27.233571  252004 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:37:27.233580  252004 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:37:27.233695  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.253189  252004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:37:27.253215  252004 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:37:27.253229  252004 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:37:27.253253  252004 start.go:360] acquireMachinesLock for newest-cni-633463: {Name:mk5bb6cb5b9b89fc5f7e65da679c1a55c56b4fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:37:27.253329  252004 start.go:364] duration metric: took 36.292µs to acquireMachinesLock for "newest-cni-633463"
	I1019 17:37:27.253353  252004 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:37:27.253363  252004 fix.go:54] fixHost starting: 
	I1019 17:37:27.253610  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.270778  252004 fix.go:112] recreateIfNeeded on newest-cni-633463: state=Stopped err=<nil>
	W1019 17:37:27.270810  252004 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:37:27.274260  252004 out.go:252] * Restarting existing docker container for "newest-cni-633463" ...
	I1019 17:37:27.274384  252004 cli_runner.go:164] Run: docker start newest-cni-633463
	I1019 17:37:27.550197  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.572138  252004 kic.go:430] container "newest-cni-633463" state is running.
	I1019 17:37:27.572522  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:27.598627  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.598852  252004 machine.go:94] provisionDockerMachine start ...
	I1019 17:37:27.598917  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:27.621212  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:27.621530  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:27.621539  252004 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:37:27.622140  252004 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35560->127.0.0.1:33128: read: connection reset by peer
	I1019 17:37:30.774430  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.774458  252004 ubuntu.go:182] provisioning hostname "newest-cni-633463"
	I1019 17:37:30.774529  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.793360  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.793655  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.793671  252004 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-633463 && echo "newest-cni-633463" | sudo tee /etc/hostname
	I1019 17:37:30.956546  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.956622  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.978519  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.978856  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.978879  252004 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-633463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-633463/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-633463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:37:31.143453  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:37:31.143482  252004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:37:31.143503  252004 ubuntu.go:190] setting up certificates
	I1019 17:37:31.143530  252004 provision.go:84] configureAuth start
	I1019 17:37:31.143603  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:31.162905  252004 provision.go:143] copyHostCerts
	I1019 17:37:31.162977  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:37:31.163001  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:37:31.163081  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:37:31.163199  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:37:31.163210  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:37:31.163237  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:37:31.163303  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:37:31.163313  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:37:31.163341  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:37:31.163402  252004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-633463 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-633463]
	I1019 17:37:32.238364  252004 provision.go:177] copyRemoteCerts
	I1019 17:37:32.238433  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:37:32.238477  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.259782  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.363454  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:37:32.382228  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:37:32.402346  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:37:32.434653  252004 provision.go:87] duration metric: took 1.291102282s to configureAuth
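The server certificate generated above embeds the SAN list from the log line (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-633463); it can be confirmed with openssl. A sketch, reusing the path from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'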
	I1019 17:37:32.434677  252004 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:37:32.434877  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:32.434994  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.460163  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:32.460471  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:32.460484  252004 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:37:32.812582  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:37:32.812601  252004 machine.go:97] duration metric: took 5.213739158s to provisionDockerMachine
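Whether the restarted cri-o actually picked up the CRIO_MINIKUBE_OPTIONS drop-in written above can be double-checked from the host; a sketch, assuming the profile from this run:

	minikube -p newest-cni-633463 ssh -- 'cat /etc/sysconfig/crio.minikube && systemctl is-active crio'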
	I1019 17:37:32.812612  252004 start.go:293] postStartSetup for "newest-cni-633463" (driver="docker")
	I1019 17:37:32.812623  252004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:37:32.812687  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:37:32.812731  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.845647  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.958253  252004 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:37:32.962641  252004 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:37:32.962669  252004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:37:32.962681  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:37:32.962741  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:37:32.962825  252004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:37:32.962929  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:37:32.982498  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:37:33.019033  252004 start.go:296] duration metric: took 206.405729ms for postStartSetup
	I1019 17:37:33.019119  252004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:37:33.019182  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.060276  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.167952  252004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:37:33.176969  252004 fix.go:56] duration metric: took 5.923599942s for fixHost
	I1019 17:37:33.176995  252004 start.go:83] releasing machines lock for "newest-cni-633463", held for 5.923653801s
	I1019 17:37:33.177082  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:33.203375  252004 ssh_runner.go:195] Run: cat /version.json
	I1019 17:37:33.203411  252004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:37:33.203489  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.203426  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.248837  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.249412  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.469299  252004 ssh_runner.go:195] Run: systemctl --version
	I1019 17:37:33.477000  252004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:37:33.515118  252004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:37:33.520482  252004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:37:33.520556  252004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:37:33.529508  252004 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
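The find/mv above side-lines conflicting bridge/podman CNI configs by appending a .mk_disabled suffix rather than deleting them (none were present in this run). A sketch of the reverse operation, assuming the same suffix convention:

	# restore any configs minikube renamed out of the way
	sudo find /etc/cni/net.d -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;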
	I1019 17:37:33.529534  252004 start.go:496] detecting cgroup driver to use...
	I1019 17:37:33.529596  252004 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:37:33.529659  252004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:37:33.550450  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:37:33.567224  252004 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:37:33.567330  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:37:33.587367  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:37:33.603412  252004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:37:33.721296  252004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:37:33.835246  252004 docker.go:234] disabling docker service ...
	I1019 17:37:33.835350  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:37:33.850207  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:37:33.864410  252004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:37:33.985866  252004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:37:34.153123  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:37:34.167027  252004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:37:34.183139  252004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:37:34.183251  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.193611  252004 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:37:34.193726  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.203528  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.215302  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.225045  252004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:37:34.233944  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.244342  252004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.252329  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
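Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape; this is a reconstruction from the commands (with cri-o's usual TOML sections assumed), not a capture of the file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]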
	I1019 17:37:34.263222  252004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:37:34.273315  252004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:37:34.281853  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:34.398185  252004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:37:34.534169  252004 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:37:34.534291  252004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:37:34.538246  252004 start.go:564] Will wait 60s for crictl version
	I1019 17:37:34.538363  252004 ssh_runner.go:195] Run: which crictl
	I1019 17:37:34.542091  252004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:37:34.567928  252004 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:37:34.568078  252004 ssh_runner.go:195] Run: crio --version
	I1019 17:37:34.597233  252004 ssh_runner.go:195] Run: crio --version
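The version probe that start.go parses above can be reproduced on the node with crictl pointed at the cri-o socket configured in /etc/crictl.yaml; a sketch:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version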
	I1019 17:37:34.633625  252004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:37:34.636469  252004 cli_runner.go:164] Run: docker network inspect newest-cni-633463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:37:34.651969  252004 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:37:34.656388  252004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:37:34.669690  252004 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 17:37:34.672465  252004 kubeadm.go:884] updating cluster {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:37:34.672612  252004 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:37:34.672681  252004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:37:34.749018  252004 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:37:34.749089  252004 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:37:34.749162  252004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:37:34.794191  252004 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:37:34.794264  252004 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:37:34.794288  252004 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:37:34.794413  252004 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-633463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
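The empty ExecStart= in the rendered unit is the standard systemd drop-in idiom: non-oneshot services allow only one ExecStart, so an override must clear the inherited value before supplying its own. In general form, assuming the 10-kubeadm.conf drop-in that is scp'd a few lines below:

	[Service]
	# systemd rejects a second ExecStart= for simple services,
	# so the drop-in clears the inherited one first
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml ...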
	I1019 17:37:34.794518  252004 ssh_runner.go:195] Run: crio config
	I1019 17:37:34.860893  252004 cni.go:84] Creating CNI manager for ""
	I1019 17:37:34.860958  252004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:34.860994  252004 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 17:37:34.861034  252004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-633463 NodeName:newest-cni-633463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:37:34.861183  252004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-633463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:37:34.861285  252004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:37:34.873668  252004 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:37:34.873773  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:37:34.884115  252004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:37:34.899167  252004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:37:34.913418  252004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
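Before kubeadm consumes it, a generated file like /var/tmp/minikube/kubeadm.yaml.new can be sanity-checked with kubeadm itself; a sketch, assuming `kubeadm config validate` is available (it was added around v1.26, so the v1.34.1 binary here would have it):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new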
	I1019 17:37:34.928410  252004 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:37:34.932497  252004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:37:34.942938  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:35.118309  252004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:37:35.151281  252004 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463 for IP: 192.168.85.2
	I1019 17:37:35.151298  252004 certs.go:195] generating shared ca certs ...
	I1019 17:37:35.151314  252004 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:35.151434  252004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:37:35.151469  252004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:37:35.151476  252004 certs.go:257] generating profile certs ...
	I1019 17:37:35.151552  252004 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.key
	I1019 17:37:35.151601  252004 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key.1ea41287
	I1019 17:37:35.151636  252004 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key
	I1019 17:37:35.151753  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:37:35.151783  252004 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:37:35.151792  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:37:35.151815  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:37:35.151839  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:37:35.151860  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:37:35.151900  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:37:35.152504  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:37:35.200597  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:37:35.230333  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:37:35.258952  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:37:35.280905  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:37:35.335355  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:37:35.396523  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:37:35.433499  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:37:35.470268  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:37:35.489518  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:37:35.528313  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:37:35.559597  252004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:37:35.576016  252004 ssh_runner.go:195] Run: openssl version
	I1019 17:37:35.583318  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:37:35.593494  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.600654  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.600720  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.659469  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:37:35.671808  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:37:35.681887  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.686331  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.686396  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.744102  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:37:35.753718  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:37:35.763391  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.767365  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.767433  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.817437  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
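The <hash>.0 symlinks created above follow OpenSSL's c_rehash convention: `openssl x509 -hash` prints the subject-name hash under which verifiers look a certificate up in /etc/ssl/certs (b5213941 is minikubeCA's hash in this run). The pairing can be reproduced by hand; a sketch:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"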
	I1019 17:37:35.825631  252004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:37:35.829562  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:37:35.895384  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:37:35.974594  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:37:36.095726  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:37:36.221650  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:37:36.343301  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
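Each of the -checkend 86400 probes exits 0 only if the certificate is still valid 24 hours from now, which is presumably how the restart path decides whether control-plane certs need regenerating. A standalone version of the check:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "still valid for at least 24h"
	else
	  echo "expires within 24h (or is already expired)"
	fi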
	I1019 17:37:36.432978  252004 kubeadm.go:401] StartCluster: {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:36.433073  252004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:37:36.433140  252004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:37:36.498472  252004 cri.go:89] found id: "42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840"
	I1019 17:37:36.498496  252004 cri.go:89] found id: "8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87"
	I1019 17:37:36.498502  252004 cri.go:89] found id: "9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760"
	I1019 17:37:36.498506  252004 cri.go:89] found id: "1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af"
	I1019 17:37:36.498509  252004 cri.go:89] found id: ""
	I1019 17:37:36.498581  252004 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:37:36.519309  252004 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:36Z" level=error msg="open /run/runc: no such file or directory"
	I1019 17:37:36.519401  252004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:37:36.535125  252004 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:37:36.535154  252004 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:37:36.535202  252004 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:37:36.549734  252004 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:37:36.550325  252004 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-633463" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:36.550710  252004 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-633463" cluster setting kubeconfig missing "newest-cni-633463" context setting]
	I1019 17:37:36.551159  252004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.552771  252004 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:37:36.569733  252004 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:37:36.569768  252004 kubeadm.go:602] duration metric: took 34.607778ms to restartPrimaryControlPlane
	I1019 17:37:36.569777  252004 kubeadm.go:403] duration metric: took 136.80791ms to StartCluster
	I1019 17:37:36.569791  252004 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.569851  252004 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:36.570800  252004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.571001  252004 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:37:36.571349  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:36.571375  252004 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:37:36.571528  252004 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-633463"
	I1019 17:37:36.571540  252004 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-633463"
	W1019 17:37:36.571553  252004 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:37:36.571572  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.572383  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.572532  252004 addons.go:70] Setting dashboard=true in profile "newest-cni-633463"
	I1019 17:37:36.572549  252004 addons.go:239] Setting addon dashboard=true in "newest-cni-633463"
	W1019 17:37:36.572556  252004 addons.go:248] addon dashboard should already be in state true
	I1019 17:37:36.572583  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.572955  252004 addons.go:70] Setting default-storageclass=true in profile "newest-cni-633463"
	I1019 17:37:36.572972  252004 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-633463"
	I1019 17:37:36.573200  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.573322  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.576375  252004 out.go:179] * Verifying Kubernetes components...
	I1019 17:37:36.579575  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:36.622594  252004 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:37:36.627897  252004 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:37:36.629343  252004 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:36.629367  252004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:37:36.629433  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.630688  252004 addons.go:239] Setting addon default-storageclass=true in "newest-cni-633463"
	W1019 17:37:36.630710  252004 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:37:36.630735  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.631149  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.640030  252004 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:37:36.644156  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:37:36.644188  252004 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:37:36.644280  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.676441  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.689281  252004 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:37:36.689318  252004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:37:36.689378  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.711052  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.743001  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.939083  252004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:37.024609  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:37:37.024646  252004 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:37:37.058170  252004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:37:37.108765  252004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:37:37.170398  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:37:37.170475  252004 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:37:37.281423  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:37:37.281489  252004 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:37:37.346612  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:37:37.346677  252004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:37:37.407870  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:37:37.407960  252004 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:37:37.439852  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:37:37.439928  252004 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:37:37.504803  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:37:37.504831  252004 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:37:37.542765  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:37:37.542784  252004 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:37:37.578305  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:37:37.578326  252004 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:37:37.612357  252004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:37:44.421984  252004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.48286648s)
	I1019 17:37:44.422059  252004 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.363857526s)
	I1019 17:37:44.422107  252004 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:37:44.422172  252004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:37:44.422264  252004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.31342206s)
	I1019 17:37:44.531139  252004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.918729251s)
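	
	Annotation: each of the three applies runs inside the node over SSH, pointing kubectl at the node-local kubeconfig (/var/lib/minikube/kubeconfig) and the bundled binary under /var/lib/minikube/binaries/v1.34.1/, which is why they succeed even while the host-side kubeconfig is still being repaired. The same invocation can be issued by hand from the host; this is a sketch assuming the standard `minikube ssh` command passthrough:
	
	    minikube -p newest-cni-633463 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	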
	I1019 17:37:44.531460  252004 api_server.go:72] duration metric: took 7.960430335s to wait for apiserver process to appear ...
	I1019 17:37:44.531500  252004 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:37:44.531535  252004 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:37:44.537535  252004 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-633463 addons enable metrics-server
	
	I1019 17:37:44.542079  252004 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 17:37:44.546675  252004 addons.go:515] duration metric: took 7.975281329s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 17:37:44.551149  252004 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:37:44.551185  252004 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
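	
	Annotation: only one hook is failing in the 500 response above ([-]poststarthook/rbac/bootstrap-roles), which is normal in the first seconds after an apiserver restart while the bootstrap RBAC roles are reconciled; minikube simply retries until /healthz returns 200, as it does on the next poll below. The per-hook breakdown shown in the log comes from the verbose form of the endpoint and can be fetched by hand (-k skips verification of the minikubeCA-signed cert):
	
	    curl -ks https://192.168.85.2:8443/healthz?verbose
	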
	I1019 17:37:45.044245  252004 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:37:45.103398  252004 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:37:45.105713  252004 api_server.go:141] control plane version: v1.34.1
	I1019 17:37:45.105750  252004 api_server.go:131] duration metric: took 574.225344ms to wait for apiserver health ...
	I1019 17:37:45.105761  252004 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:37:45.117646  252004 system_pods.go:59] 8 kube-system pods found
	I1019 17:37:45.117709  252004 system_pods.go:61] "coredns-66bc5c9577-c4f4b" [05111d3d-bb2d-418d-8839-fd77dd6da259] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:37:45.117723  252004 system_pods.go:61] "etcd-newest-cni-633463" [6a5e2105-f5b2-42fe-b84e-b4fabe762787] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:37:45.117730  252004 system_pods.go:61] "kindnet-9zt9r" [225c1116-2e3f-4fe7-93d6-b3199509c1a8] Running
	I1019 17:37:45.117739  252004 system_pods.go:61] "kube-apiserver-newest-cni-633463" [ed52c336-ad74-4a2b-b340-80f71537080a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:37:45.117746  252004 system_pods.go:61] "kube-controller-manager-newest-cni-633463" [99395d0f-9a8b-4874-a0cc-9e1d8f64950e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:37:45.117753  252004 system_pods.go:61] "kube-proxy-gktcz" [ddc682d3-91d8-48e5-b254-cbb87e6f5106] Running
	I1019 17:37:45.117766  252004 system_pods.go:61] "kube-scheduler-newest-cni-633463" [f1e717aa-1eee-48e8-a48b-8980e8389603] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:37:45.117773  252004 system_pods.go:61] "storage-provisioner" [ba44ef1f-311c-409e-a01b-f15080f8ac35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:37:45.117783  252004 system_pods.go:74] duration metric: took 12.015535ms to wait for pod list to return data ...
	I1019 17:37:45.117793  252004 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:37:45.126959  252004 default_sa.go:45] found service account: "default"
	I1019 17:37:45.126993  252004 default_sa.go:55] duration metric: took 9.192621ms for default service account to be created ...
	I1019 17:37:45.127008  252004 kubeadm.go:587] duration metric: took 8.555978668s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:37:45.127028  252004 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:37:45.145607  252004 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:37:45.145662  252004 node_conditions.go:123] node cpu capacity is 2
	I1019 17:37:45.145677  252004 node_conditions.go:105] duration metric: took 18.642445ms to run NodePressure ...
	I1019 17:37:45.145698  252004 start.go:242] waiting for startup goroutines ...
	I1019 17:37:45.145706  252004 start.go:247] waiting for cluster config update ...
	I1019 17:37:45.145718  252004 start.go:256] writing updated cluster config ...
	I1019 17:37:45.146054  252004 ssh_runner.go:195] Run: rm -f paused
	I1019 17:37:45.257221  252004 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:37:45.261312  252004 out.go:179] * Done! kubectl is now configured to use "newest-cni-633463" cluster and "default" namespace by default
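	
	Annotation: the closing version note (kubectl 1.33.2 against a 1.34.1 control plane) is within kubectl's supported +/-1 minor version skew, so it is informational only. Quick sanity checks against the freshly written kubeconfig:
	
	    kubectl config current-context    # should print newest-cni-633463
	    kubectl version                   # client v1.33.x vs server v1.34.1: one minor of skew, supported
	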
	
	
	==> CRI-O <==
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.60334305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.627192703Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=227bd960-6688-4037-bb95-7ed36c0b42a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.644362374Z" level=info msg="Ran pod sandbox 5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa with infra container: kube-system/kindnet-9zt9r/POD" id=227bd960-6688-4037-bb95-7ed36c0b42a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.645935006Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-gktcz/POD" id=af581991-43da-4c9b-8b70-f370df96bd02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.646111229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.650946108Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=af581991-43da-4c9b-8b70-f370df96bd02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.655235444Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=239ee2bc-2c9d-41eb-85f0-2a765e66c2a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.658899156Z" level=info msg="Ran pod sandbox a99e2d28c556b7708b06cc8cf6a389ed06a50b34767f2d9c02332a1296d396b0 with infra container: kube-system/kube-proxy-gktcz/POD" id=af581991-43da-4c9b-8b70-f370df96bd02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.6598011Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=76fff446-594d-4d8b-891f-2eb078bba7d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.667301089Z" level=info msg="Creating container: kube-system/kindnet-9zt9r/kindnet-cni" id=f313fe73-52ad-4673-b5f7-ba3821ee8be9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.667590578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.697479391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.697968366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.723713953Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=86dbd769-63ef-425b-bc2a-1a2251bd62e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.745375326Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c66d764c-f444-4dcf-904f-2039f8b76320 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.748062615Z" level=info msg="Creating container: kube-system/kube-proxy-gktcz/kube-proxy" id=19b5ac8d-ae30-4b48-b903-bdf83a20c09f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.748500734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.770526715Z" level=info msg="Created container edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3: kube-system/kindnet-9zt9r/kindnet-cni" id=f313fe73-52ad-4673-b5f7-ba3821ee8be9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.780994701Z" level=info msg="Starting container: edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3" id=6dd2920a-e10c-4d18-8f4a-893862ccb62f name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.814132722Z" level=info msg="Started container" PID=1060 containerID=edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3 description=kube-system/kindnet-9zt9r/kindnet-cni id=6dd2920a-e10c-4d18-8f4a-893862ccb62f name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.81919203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.819859461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.964640039Z" level=info msg="Created container 4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200: kube-system/kube-proxy-gktcz/kube-proxy" id=19b5ac8d-ae30-4b48-b903-bdf83a20c09f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.975779567Z" level=info msg="Starting container: 4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200" id=8382387b-01f0-45d1-b64e-f3224b666663 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.97961778Z" level=info msg="Started container" PID=1071 containerID=4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200 description=kube-system/kube-proxy-gktcz/kube-proxy id=8382387b-01f0-45d1-b64e-f3224b666663 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a99e2d28c556b7708b06cc8cf6a389ed06a50b34767f2d9c02332a1296d396b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4b7df96eb28b4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   a99e2d28c556b       kube-proxy-gktcz                            kube-system
	edb0c10bd96e6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   5a74404d4b7b6       kindnet-9zt9r                               kube-system
	42c0497fdaeaa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   168c4cb1821d5       kube-controller-manager-newest-cni-633463   kube-system
	8ef7387fc1701       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   4aa7edf88a52e       kube-scheduler-newest-cni-633463            kube-system
	9a825a8a6bd59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   50219b1cab252       etcd-newest-cni-633463                      kube-system
	1fc2f09faeca0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   eb41687a3d152       kube-apiserver-newest-cni-633463            kube-system
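	
	Annotation: the ATTEMPT column reading 1 for every entry confirms each component is on its first restart after the pause/unpause cycle, and the four control-plane container IDs match the ones crictl found at the start of this run. The table can be regenerated inside the node using the same label filter the log itself used:
	
	    minikube -p newest-cni-633463 ssh -- \
	      sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	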
	
	
	==> describe nodes <==
	Name:               newest-cni-633463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-633463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=newest-cni-633463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:37:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-633463
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:37:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-633463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e953ded1-d3da-4e1a-97c3-cbeb95b772c3
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-633463                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-9zt9r                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-633463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-633463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-gktcz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-633463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-633463 event: Registered Node newest-cni-633463 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-633463 event: Registered Node newest-cni-633463 in Controller
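	
	Annotation: the describe output is consistent with a control plane that has just come back: the node.kubernetes.io/not-ready:NoSchedule taint and the KubeletNotReady condition ("no CNI configuration file in /etc/cni/net.d/") explain why coredns and storage-provisioner were still reported Unschedulable earlier, and both should clear once the restarted kindnet pod rewrites its CNI config. A standard way to wait for that transition:
	
	    kubectl wait --for=condition=Ready node/newest-cni-633463 --timeout=120s
	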
	
	
	==> dmesg <==
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	[Oct19 17:36] overlayfs: idmapped layers are currently not supported
	[Oct19 17:37] overlayfs: idmapped layers are currently not supported
	[ +27.886872] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760] <==
	{"level":"warn","ts":"2025-10-19T17:37:40.718033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.746810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.816105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.845312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.874919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.900078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.942280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.961031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.973406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.017286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.036960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.047342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.097116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.129231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.168612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.191894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.216209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.234004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.253736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.311559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.338186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.355939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.378702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.402027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.543512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:37:49 up  1:20,  0 user,  load average: 5.68, 4.37, 3.69
	Linux newest-cni-633463 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3] <==
	I1019 17:37:44.007922       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:37:44.008503       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:37:44.021587       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:37:44.021699       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:37:44.021745       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:37:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:37:44.218852       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:37:44.218938       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:37:44.218979       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:37:44.220154       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af] <==
	I1019 17:37:42.796806       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:37:42.796813       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:37:42.832853       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:37:42.833266       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:37:42.834787       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:37:42.842089       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:37:42.849552       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:37:42.853977       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:37:42.854758       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:37:42.854770       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:37:42.866151       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 17:37:42.878510       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:37:42.886163       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:37:43.373677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:37:43.413103       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:37:43.617239       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:37:43.848711       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:37:44.051027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:37:44.132100       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:37:44.433611       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.143.97"}
	I1019 17:37:44.518433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.163.148"}
	I1019 17:37:46.213206       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:37:46.536901       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:37:46.594329       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:37:46.686321       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840] <==
	I1019 17:37:46.135953       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:37:46.136004       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:37:46.143690       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:37:46.144408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:37:46.144438       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:37:46.144445       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:37:46.144521       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:37:46.145467       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:37:46.145582       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:37:46.152684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:37:46.158285       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:37:46.163801       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:37:46.163880       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:37:46.164010       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:37:46.172984       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:37:46.178630       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:37:46.181964       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:37:46.178682       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:37:46.178704       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:37:46.179268       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 17:37:46.179254       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:37:46.183915       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:37:46.190015       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:37:46.200208       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:37:46.201457       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200] <==
	I1019 17:37:44.293441       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:37:44.597807       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:37:44.698405       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:37:44.698495       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:37:44.698647       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:37:44.783197       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:37:44.783334       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:37:44.871605       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:37:44.872037       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:37:44.872231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:37:44.877290       1 config.go:200] "Starting service config controller"
	I1019 17:37:44.877351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:37:44.877396       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:37:44.877423       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:37:44.877478       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:37:44.877504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:37:44.878133       1 config.go:309] "Starting node config controller"
	I1019 17:37:44.880463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:37:44.880526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:37:44.978427       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:37:44.978518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:37:44.978604       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
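	
	Annotation: kube-proxy's only complaint is the unset nodePortAddresses, which the message itself explains: NodePort connections will be accepted on all local IPs. If the stricter behaviour suggested by the message is wanted, one option (a sketch, not something minikube configures by default) is to set the field in the kubeadm-managed kube-proxy ConfigMap and restart the daemonset:
	
	    kubectl -n kube-system edit configmap kube-proxy   # under config.conf, set: nodePortAddresses: ["primary"]
	    kubectl -n kube-system rollout restart daemonset kube-proxy
	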
	
	
	==> kube-scheduler [8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87] <==
	I1019 17:37:39.964045       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:37:42.710984       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:37:42.713115       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:37:42.713148       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:37:42.713156       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:37:42.869178       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:37:42.883278       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:37:42.885712       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:37:42.885825       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:37:42.885841       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:37:42.885856       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:37:42.989376       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.116626     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-633463\" not found" node="newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.614626     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.866695     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.866818     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.866850     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.869911     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.944431     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-633463\" already exists" pod="kube-system/kube-scheduler-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.944465     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.956115     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-633463\" already exists" pod="kube-system/etcd-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.956150     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.969364     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-633463\" already exists" pod="kube-system/kube-apiserver-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.969396     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.981586     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-633463\" already exists" pod="kube-system/kube-controller-manager-newest-cni-633463"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.286646     729 apiserver.go:52] "Watching apiserver"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.313406     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404128     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-cni-cfg\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404172     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-xtables-lock\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404213     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc682d3-91d8-48e5-b254-cbb87e6f5106-lib-modules\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404256     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc682d3-91d8-48e5-b254-cbb87e6f5106-xtables-lock\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404274     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-lib-modules\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.451100     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: W1019 17:37:43.646656     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/crio-5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa WatchSource:0}: Error finding container 5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa: Status 404 returned error can't find the container with id 5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa
	Oct 19 17:37:46 newest-cni-633463 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:37:46 newest-cni-633463 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:37:46 newest-cni-633463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
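The kubelet log above ends with systemd deactivating kubelet.service at 17:37:46, which lines up with the `pause -p newest-cni-633463` entry in the audit table further down: the kic container itself keeps running (see the docker inspect below) while the kubelet inside it is stopped. A minimal Go sketch for confirming that from the host, reusing the container name from this report; this is an illustration, not part of the test harness:

	// Ask systemd inside the kic container whether kubelet is still active.
	// `systemctl is-active` exits non-zero for "inactive"/"failed", which
	// os/exec surfaces as a non-nil error alongside the printed state.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "exec", "newest-cni-633463",
			"systemctl", "is-active", "kubelet").CombinedOutput()
		fmt.Printf("kubelet: %s (err=%v)\n", bytes.TrimSpace(out), err)
	}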
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-633463 -n newest-cni-633463
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-633463 -n newest-cni-633463: exit status 2 (343.494327ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
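`minikube status` reports per-component state on stdout and also encodes health in its exit code, so "exit status 2" alongside stdout `Running` is the harness reading a degraded-but-queryable cluster, hence the "(may be ok)" note. A sketch of capturing both, using the binary path and profile from this report; treating 2 as "degraded, keep going" mirrors the handling above and is an assumption, not a documented minikube contract:

	// Run the same status query as the harness and keep both the templated
	// stdout and the process exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "newest-cni-633463",
			"-n", "newest-cni-633463")
		out, err := cmd.Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		} else if err != nil {
			panic(err) // could not run the binary at all
		}
		fmt.Printf("status=%q exit=%d\n", out, code)
	}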
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-633463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89: exit status 1 (82.174004ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-c4f4b" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-v8z7r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zcp89" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89: exit status 1
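The NotFound errors above are expected rather than a second failure: the pod list at helpers_test.go:269 was collected with `-A` (all namespaces), but the follow-up `describe` omits `-n`, so kubectl looks the names up in the `default` namespace while the pods live in kube-system and kubernetes-dashboard. A client-go sketch of the same non-running-pod query; the kubeconfig location is an assumption (the default ~/.kube/config):

	// List pods in any phase other than Running across all namespaces,
	// matching the field selector used by the harness.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(
			context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}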
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-633463
helpers_test.go:243: (dbg) docker inspect newest-cni-633463:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea",
	        "Created": "2025-10-19T17:36:48.723991016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252130,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T17:37:27.306815158Z",
	            "FinishedAt": "2025-10-19T17:37:26.436245868Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/hosts",
	        "LogPath": "/var/lib/docker/containers/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea-json.log",
	        "Name": "/newest-cni-633463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-633463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-633463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea",
	                "LowerDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b-init/diff:/var/lib/docker/overlay2/225abf494e9c5b91fc58a5603f38469238a5b978b55c574459b7726365a451a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85982fa217311fb34c1a41f99552089cf1b2df44d6c629d24198b7fec948229b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-633463",
	                "Source": "/var/lib/docker/volumes/newest-cni-633463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-633463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-633463",
	                "name.minikube.sigs.k8s.io": "newest-cni-633463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de8d79efe2ef4473549e8abe471d29c8da6cbf1d8a8493ff38092cfe6d3b83fd",
	            "SandboxKey": "/var/run/docker/netns/de8d79efe2ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-633463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:07:6b:1d:e9:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903462b71a8c585e1f826b3d07accd39a29c6c1814ddb40704a08f8813291f55",
	                    "EndpointID": "ef934d3ac8f3a53ca6755c491742e9a663f4566eca8bb2dc075b591e250f9e5f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-633463",
	                        "dc48a98a25fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
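For a Pause failure, the telling fields in the inspect dump are under "State": `"Running": true, "Paused": false` shows the kic container itself was never Docker-paused (minikube's pause acts on the kubelet and workloads inside it rather than on the outer container), and the published 127.0.0.1 ports confirm the node stayed reachable. A small sketch for extracting just those fields; the struct here is a hand-rolled subset, not the Docker SDK types:

	// Decode only the name and State block from `docker inspect` output.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type entry struct {
		Name  string
		State struct {
			Status  string
			Running bool
			Paused  bool
			Pid     int
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-633463").Output()
		if err != nil {
			panic(err)
		}
		var entries []entry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			fmt.Printf("%s status=%s running=%t paused=%t pid=%d\n",
				e.Name, e.State.Status, e.State.Running, e.State.Paused, e.State.Pid)
		}
	}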
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463: exit status 2 (333.434567ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-633463 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-633463 logs -n 25: (1.069745248s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-167748                                                                                                                                                                                                               │ disable-driver-mounts-167748 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:34 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:34 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-296314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │                     │
	│ stop    │ -p embed-certs-296314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:35 UTC │
	│ start   │ -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:35 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-370596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-370596 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ embed-certs-296314 image list --format=json                                                                                                                                                                                                   │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ pause   │ -p embed-certs-296314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │                     │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ delete  │ -p embed-certs-296314                                                                                                                                                                                                                         │ embed-certs-296314           │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:36 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:36 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-633463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ stop    │ -p newest-cni-633463 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-633463 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ start   │ -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ default-k8s-diff-port-370596 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ pause   │ -p default-k8s-diff-port-370596 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-370596                                                                                                                                                                                                               │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ delete  │ -p default-k8s-diff-port-370596                                                                                                                                                                                                               │ default-k8s-diff-port-370596 │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ image   │ newest-cni-633463 image list --format=json                                                                                                                                                                                                    │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │ 19 Oct 25 17:37 UTC │
	│ pause   │ -p newest-cni-633463 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-633463            │ jenkins │ v1.37.0 │ 19 Oct 25 17:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:37:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:37:27.032239  252004 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:37:27.032438  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032469  252004 out.go:374] Setting ErrFile to fd 2...
	I1019 17:37:27.032495  252004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:37:27.032763  252004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:37:27.033178  252004 out.go:368] Setting JSON to false
	I1019 17:37:27.034113  252004 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4795,"bootTime":1760890652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:37:27.034212  252004 start.go:143] virtualization:  
	I1019 17:37:27.039794  252004 out.go:179] * [newest-cni-633463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:37:27.043053  252004 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:37:27.043135  252004 notify.go:221] Checking for updates...
	I1019 17:37:27.049102  252004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:37:27.051961  252004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:27.054936  252004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:37:27.057816  252004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:37:27.060704  252004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:37:27.063995  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:27.064614  252004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:37:27.096144  252004 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:37:27.096298  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.151308  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.14172198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.151422  252004 docker.go:319] overlay module found
	I1019 17:37:27.154606  252004 out.go:179] * Using the docker driver based on existing profile
	I1019 17:37:27.157314  252004 start.go:309] selected driver: docker
	I1019 17:37:27.157331  252004 start.go:930] validating driver "docker" against &{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.157428  252004 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:37:27.158143  252004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:37:27.221005  252004 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:37:27.211547247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:37:27.221371  252004 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:37:27.221408  252004 cni.go:84] Creating CNI manager for ""
	I1019 17:37:27.221460  252004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:27.221500  252004 start.go:353] cluster config:
	{Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:27.224716  252004 out.go:179] * Starting "newest-cni-633463" primary control-plane node in "newest-cni-633463" cluster
	I1019 17:37:27.227526  252004 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 17:37:27.230600  252004 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:37:27.233300  252004 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:37:27.233356  252004 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 17:37:27.233386  252004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:37:27.233392  252004 cache.go:59] Caching tarball of preloaded images
	I1019 17:37:27.233571  252004 preload.go:233] Found /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 17:37:27.233580  252004 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:37:27.233695  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.253189  252004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:37:27.253215  252004 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:37:27.253229  252004 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:37:27.253253  252004 start.go:360] acquireMachinesLock for newest-cni-633463: {Name:mk5bb6cb5b9b89fc5f7e65da679c1a55c56b4fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:37:27.253329  252004 start.go:364] duration metric: took 36.292µs to acquireMachinesLock for "newest-cni-633463"
	I1019 17:37:27.253353  252004 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:37:27.253363  252004 fix.go:54] fixHost starting: 
	I1019 17:37:27.253610  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.270778  252004 fix.go:112] recreateIfNeeded on newest-cni-633463: state=Stopped err=<nil>
	W1019 17:37:27.270810  252004 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:37:27.274260  252004 out.go:252] * Restarting existing docker container for "newest-cni-633463" ...
	I1019 17:37:27.274384  252004 cli_runner.go:164] Run: docker start newest-cni-633463
	I1019 17:37:27.550197  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:27.572138  252004 kic.go:430] container "newest-cni-633463" state is running.
	I1019 17:37:27.572522  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:27.598627  252004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/config.json ...
	I1019 17:37:27.598852  252004 machine.go:94] provisionDockerMachine start ...
	I1019 17:37:27.598917  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:27.621212  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:27.621530  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:27.621539  252004 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:37:27.622140  252004 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35560->127.0.0.1:33128: read: connection reset by peer
	I1019 17:37:30.774430  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.774458  252004 ubuntu.go:182] provisioning hostname "newest-cni-633463"
	I1019 17:37:30.774529  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.793360  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.793655  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.793671  252004 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-633463 && echo "newest-cni-633463" | sudo tee /etc/hostname
	I1019 17:37:30.956546  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-633463
	
	I1019 17:37:30.956622  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:30.978519  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:30.978856  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:30.978879  252004 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-633463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-633463/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-633463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:37:31.143453  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:37:31.143482  252004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-2307/.minikube}
	I1019 17:37:31.143503  252004 ubuntu.go:190] setting up certificates
	I1019 17:37:31.143530  252004 provision.go:84] configureAuth start
	I1019 17:37:31.143603  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:31.162905  252004 provision.go:143] copyHostCerts
	I1019 17:37:31.162977  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem, removing ...
	I1019 17:37:31.163001  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem
	I1019 17:37:31.163081  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/key.pem (1679 bytes)
	I1019 17:37:31.163199  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem, removing ...
	I1019 17:37:31.163210  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem
	I1019 17:37:31.163237  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/ca.pem (1082 bytes)
	I1019 17:37:31.163303  252004 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem, removing ...
	I1019 17:37:31.163313  252004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem
	I1019 17:37:31.163341  252004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-2307/.minikube/cert.pem (1123 bytes)
	I1019 17:37:31.163402  252004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-633463 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-633463]
	I1019 17:37:32.238364  252004 provision.go:177] copyRemoteCerts
	I1019 17:37:32.238433  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:37:32.238477  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.259782  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.363454  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:37:32.382228  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:37:32.402346  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:37:32.434653  252004 provision.go:87] duration metric: took 1.291102282s to configureAuth
	I1019 17:37:32.434677  252004 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:37:32.434877  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:32.434994  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.460163  252004 main.go:143] libmachine: Using SSH client type: native
	I1019 17:37:32.460471  252004 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:37:32.460484  252004 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:37:32.812582  252004 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:37:32.812601  252004 machine.go:97] duration metric: took 5.213739158s to provisionDockerMachine
	I1019 17:37:32.812612  252004 start.go:293] postStartSetup for "newest-cni-633463" (driver="docker")
	I1019 17:37:32.812623  252004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:37:32.812687  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:37:32.812731  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:32.845647  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:32.958253  252004 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:37:32.962641  252004 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:37:32.962669  252004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:37:32.962681  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/addons for local assets ...
	I1019 17:37:32.962741  252004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-2307/.minikube/files for local assets ...
	I1019 17:37:32.962825  252004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem -> 41112.pem in /etc/ssl/certs
	I1019 17:37:32.962929  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:37:32.982498  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:37:33.019033  252004 start.go:296] duration metric: took 206.405729ms for postStartSetup
	I1019 17:37:33.019119  252004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:37:33.019182  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.060276  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.167952  252004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:37:33.176969  252004 fix.go:56] duration metric: took 5.923599942s for fixHost
	I1019 17:37:33.176995  252004 start.go:83] releasing machines lock for "newest-cni-633463", held for 5.923653801s
	I1019 17:37:33.177082  252004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-633463
	I1019 17:37:33.203375  252004 ssh_runner.go:195] Run: cat /version.json
	I1019 17:37:33.203411  252004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:37:33.203489  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.203426  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:33.248837  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.249412  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:33.469299  252004 ssh_runner.go:195] Run: systemctl --version
	I1019 17:37:33.477000  252004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:37:33.515118  252004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:37:33.520482  252004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:37:33.520556  252004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:37:33.529508  252004 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 17:37:33.529534  252004 start.go:496] detecting cgroup driver to use...
	I1019 17:37:33.529596  252004 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 17:37:33.529659  252004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:37:33.550450  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:37:33.567224  252004 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:37:33.567330  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:37:33.587367  252004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:37:33.603412  252004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:37:33.721296  252004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:37:33.835246  252004 docker.go:234] disabling docker service ...
	I1019 17:37:33.835350  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:37:33.850207  252004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:37:33.864410  252004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:37:33.985866  252004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:37:34.153123  252004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
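
The "detected cgroupfs cgroup driver" line a few steps up comes from probing the host. One common heuristic, assumed here and not necessarily detect.go's exact logic, is to treat a systemd PID 1 as a signal that the systemd driver is usable and fall back to cgroupfs otherwise, which matches what this run selects:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cgroupDriver guesses a driver: if PID 1 is systemd, the systemd
    // cgroup driver is an option; otherwise default to cgroupfs.
    func cgroupDriver() string {
        comm, err := os.ReadFile("/proc/1/comm")
        if err == nil && strings.TrimSpace(string(comm)) == "systemd" {
            return "systemd"
        }
        return "cgroupfs"
    }

    func main() { fmt.Println("detected cgroup driver:", cgroupDriver()) }
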
	I1019 17:37:34.167027  252004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:37:34.183139  252004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:37:34.183251  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.193611  252004 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:37:34.193726  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.203528  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.215302  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.225045  252004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:37:34.233944  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.244342  252004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.252329  252004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:37:34.263222  252004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:37:34.273315  252004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:37:34.281853  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:34.398185  252004 ssh_runner.go:195] Run: sudo systemctl restart crio
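
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the daemon restart. A self-contained Go sketch of the two central substitutions (pause image and cgroup manager), applied to an in-memory copy of the config rather than over SSH:

    package main

    import (
        "fmt"
        "regexp"
    )

    // configureCrio mirrors the two sed edits: replace whatever line sets
    // pause_image / cgroup_manager with the desired values.
    func configureCrio(conf, pauseImage, cgroupMgr string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
        return conf
    }

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(configureCrio(conf, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
    }
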
	I1019 17:37:34.534169  252004 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:37:34.534291  252004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:37:34.538246  252004 start.go:564] Will wait 60s for crictl version
	I1019 17:37:34.538363  252004 ssh_runner.go:195] Run: which crictl
	I1019 17:37:34.542091  252004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:37:34.567928  252004 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 17:37:34.568078  252004 ssh_runner.go:195] Run: crio --version
	I1019 17:37:34.597233  252004 ssh_runner.go:195] Run: crio --version
	I1019 17:37:34.633625  252004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 17:37:34.636469  252004 cli_runner.go:164] Run: docker network inspect newest-cni-633463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:37:34.651969  252004 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 17:37:34.656388  252004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
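
The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends a fresh mapping. The same filter-then-append step, sketched in Go over the file contents:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes every line ending in "\t<name>" (the grep -v step)
    // and appends the new "ip\tname" mapping (the echo step).
    func upsertHost(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
        fmt.Print(upsertHost(hosts, "192.168.85.1", "host.minikube.internal"))
    }
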
	I1019 17:37:34.669690  252004 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 17:37:34.672465  252004 kubeadm.go:884] updating cluster {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:37:34.672612  252004 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:37:34.672681  252004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:37:34.749018  252004 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:37:34.749089  252004 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:37:34.749162  252004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:37:34.794191  252004 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:37:34.794264  252004 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:37:34.794288  252004 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 17:37:34.794413  252004 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-633463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:37:34.794518  252004 ssh_runner.go:195] Run: crio config
	I1019 17:37:34.860893  252004 cni.go:84] Creating CNI manager for ""
	I1019 17:37:34.860958  252004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 17:37:34.860994  252004 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 17:37:34.861034  252004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-633463 NodeName:newest-cni-633463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:37:34.861183  252004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-633463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
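
minikube renders the kubeadm/kubelet/kube-proxy YAML above from Go templates before scp'ing it to the node as kubeadm.yaml.new. A minimal sketch of that templating technique for just the networking stanza (the template text is illustrative, not minikube's actual asset):

    package main

    import (
        "os"
        "text/template"
    )

    type networking struct {
        DNSDomain     string
        PodSubnet     string
        ServiceSubnet string
    }

    const tmpl = `networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("net").Parse(tmpl))
        // Values taken from the config dump above.
        t.Execute(os.Stdout, networking{
            DNSDomain:     "cluster.local",
            PodSubnet:     "10.42.0.0/16",
            ServiceSubnet: "10.96.0.0/12",
        })
    }
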
	
	I1019 17:37:34.861285  252004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:37:34.873668  252004 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:37:34.873773  252004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:37:34.884115  252004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 17:37:34.899167  252004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:37:34.913418  252004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1019 17:37:34.928410  252004 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 17:37:34.932497  252004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:37:34.942938  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:35.118309  252004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:37:35.151281  252004 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463 for IP: 192.168.85.2
	I1019 17:37:35.151298  252004 certs.go:195] generating shared ca certs ...
	I1019 17:37:35.151314  252004 certs.go:227] acquiring lock for ca certs: {Name:mke9eecbbfdeac0a1f8a905133029fd7d119de68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:35.151434  252004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key
	I1019 17:37:35.151469  252004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key
	I1019 17:37:35.151476  252004 certs.go:257] generating profile certs ...
	I1019 17:37:35.151552  252004 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/client.key
	I1019 17:37:35.151601  252004 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key.1ea41287
	I1019 17:37:35.151636  252004 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key
	I1019 17:37:35.151753  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem (1338 bytes)
	W1019 17:37:35.151783  252004 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111_empty.pem, impossibly tiny 0 bytes
	I1019 17:37:35.151792  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 17:37:35.151815  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:37:35.151839  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:37:35.151860  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/certs/key.pem (1679 bytes)
	I1019 17:37:35.151900  252004 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem (1708 bytes)
	I1019 17:37:35.152504  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:37:35.200597  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:37:35.230333  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:37:35.258952  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 17:37:35.280905  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 17:37:35.335355  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:37:35.396523  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:37:35.433499  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/newest-cni-633463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:37:35.470268  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/ssl/certs/41112.pem --> /usr/share/ca-certificates/41112.pem (1708 bytes)
	I1019 17:37:35.489518  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:37:35.528313  252004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-2307/.minikube/certs/4111.pem --> /usr/share/ca-certificates/4111.pem (1338 bytes)
	I1019 17:37:35.559597  252004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:37:35.576016  252004 ssh_runner.go:195] Run: openssl version
	I1019 17:37:35.583318  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41112.pem && ln -fs /usr/share/ca-certificates/41112.pem /etc/ssl/certs/41112.pem"
	I1019 17:37:35.593494  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.600654  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:28 /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.600720  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41112.pem
	I1019 17:37:35.659469  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41112.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:37:35.671808  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:37:35.681887  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.686331  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.686396  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:37:35.744102  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:37:35.753718  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4111.pem && ln -fs /usr/share/ca-certificates/4111.pem /etc/ssl/certs/4111.pem"
	I1019 17:37:35.763391  252004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.767365  252004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:28 /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.767433  252004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4111.pem
	I1019 17:37:35.817437  252004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4111.pem /etc/ssl/certs/51391683.0"
	I1019 17:37:35.825631  252004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:37:35.829562  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:37:35.895384  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:37:35.974594  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:37:36.095726  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:37:36.221650  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:37:36.343301  252004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
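
Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509, sketched with illustrative path handling:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded cert's NotAfter falls
    // inside the next d (openssl -checkend semantics).
    func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, fmt.Errorf("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        soon, err := expiresWithin(pemBytes, 86400*time.Second)
        fmt.Println("expires within 24h:", soon, err)
    }
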
	I1019 17:37:36.432978  252004 kubeadm.go:401] StartCluster: {Name:newest-cni-633463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-633463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:37:36.433073  252004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:37:36.433140  252004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:37:36.498472  252004 cri.go:89] found id: "42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840"
	I1019 17:37:36.498496  252004 cri.go:89] found id: "8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87"
	I1019 17:37:36.498502  252004 cri.go:89] found id: "9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760"
	I1019 17:37:36.498506  252004 cri.go:89] found id: "1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af"
	I1019 17:37:36.498509  252004 cri.go:89] found id: ""
	I1019 17:37:36.498581  252004 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 17:37:36.519309  252004 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T17:37:36Z" level=error msg="open /run/runc: no such file or directory"
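
The failed `sudo runc list -f json` is how minikube looks for paused kube-system containers before a restart; with /run/runc absent the error is downgraded to a warning and startup continues. A sketch of parsing that JSON, assuming the usual id/status fields of runc's state output:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // runcState covers only the fields this check needs; runc's list
    // output carries more (pid, bundle, created, ...).
    type runcState struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // pausedIDs returns the ids of containers reported as paused.
    func pausedIDs(out []byte) ([]string, error) {
        var states []runcState
        if err := json.Unmarshal(out, &states); err != nil {
            return nil, err
        }
        var ids []string
        for _, s := range states {
            if s.Status == "paused" {
                ids = append(ids, s.ID)
            }
        }
        return ids, nil
    }

    func main() {
        out := []byte(`[{"id":"abc","status":"paused"},{"id":"def","status":"running"}]`)
        ids, err := pausedIDs(out)
        fmt.Println(ids, err)
    }
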
	I1019 17:37:36.519401  252004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:37:36.535125  252004 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:37:36.535154  252004 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:37:36.535202  252004 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:37:36.549734  252004 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:37:36.550325  252004 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-633463" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:36.550710  252004 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-633463" cluster setting kubeconfig missing "newest-cni-633463" context setting]
	I1019 17:37:36.551159  252004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.552771  252004 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:37:36.569733  252004 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 17:37:36.569768  252004 kubeadm.go:602] duration metric: took 34.607778ms to restartPrimaryControlPlane
	I1019 17:37:36.569777  252004 kubeadm.go:403] duration metric: took 136.80791ms to StartCluster
	I1019 17:37:36.569791  252004 settings.go:142] acquiring lock: {Name:mk691d9389e515688cf39cfe1fbaeaa24a3ed765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.569851  252004 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:37:36.570800  252004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-2307/kubeconfig: {Name:mk559185415f968598c66ed66f3ee68f830f81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:37:36.571001  252004 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:37:36.571349  252004 config.go:182] Loaded profile config "newest-cni-633463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:37:36.571375  252004 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:37:36.571528  252004 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-633463"
	I1019 17:37:36.571540  252004 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-633463"
	W1019 17:37:36.571553  252004 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:37:36.571572  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.572383  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.572532  252004 addons.go:70] Setting dashboard=true in profile "newest-cni-633463"
	I1019 17:37:36.572549  252004 addons.go:239] Setting addon dashboard=true in "newest-cni-633463"
	W1019 17:37:36.572556  252004 addons.go:248] addon dashboard should already be in state true
	I1019 17:37:36.572583  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.572955  252004 addons.go:70] Setting default-storageclass=true in profile "newest-cni-633463"
	I1019 17:37:36.572972  252004 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-633463"
	I1019 17:37:36.573200  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.573322  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.576375  252004 out.go:179] * Verifying Kubernetes components...
	I1019 17:37:36.579575  252004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:37:36.622594  252004 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:37:36.627897  252004 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 17:37:36.629343  252004 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:36.629367  252004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:37:36.629433  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.630688  252004 addons.go:239] Setting addon default-storageclass=true in "newest-cni-633463"
	W1019 17:37:36.630710  252004 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:37:36.630735  252004 host.go:66] Checking if "newest-cni-633463" exists ...
	I1019 17:37:36.631149  252004 cli_runner.go:164] Run: docker container inspect newest-cni-633463 --format={{.State.Status}}
	I1019 17:37:36.640030  252004 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 17:37:36.644156  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 17:37:36.644188  252004 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 17:37:36.644280  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.676441  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.689281  252004 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:37:36.689318  252004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:37:36.689378  252004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-633463
	I1019 17:37:36.711052  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.743001  252004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/newest-cni-633463/id_rsa Username:docker}
	I1019 17:37:36.939083  252004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:37:37.024609  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 17:37:37.024646  252004 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 17:37:37.058170  252004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:37:37.108765  252004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:37:37.170398  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 17:37:37.170475  252004 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 17:37:37.281423  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 17:37:37.281489  252004 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 17:37:37.346612  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 17:37:37.346677  252004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 17:37:37.407870  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 17:37:37.407960  252004 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 17:37:37.439852  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 17:37:37.439928  252004 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 17:37:37.504803  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 17:37:37.504831  252004 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 17:37:37.542765  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 17:37:37.542784  252004 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 17:37:37.578305  252004 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:37:37.578326  252004 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 17:37:37.612357  252004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 17:37:44.421984  252004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.48286648s)
	I1019 17:37:44.422059  252004 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.363857526s)
	I1019 17:37:44.422107  252004 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:37:44.422172  252004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:37:44.422264  252004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.31342206s)
	I1019 17:37:44.531139  252004 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.918729251s)
	I1019 17:37:44.531460  252004 api_server.go:72] duration metric: took 7.960430335s to wait for apiserver process to appear ...
	I1019 17:37:44.531500  252004 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:37:44.531535  252004 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:37:44.537535  252004 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-633463 addons enable metrics-server
	
	I1019 17:37:44.542079  252004 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 17:37:44.546675  252004 addons.go:515] duration metric: took 7.975281329s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 17:37:44.551149  252004 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:37:44.551185  252004 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:37:45.044245  252004 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:37:45.103398  252004 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 17:37:45.105713  252004 api_server.go:141] control plane version: v1.34.1
	I1019 17:37:45.105750  252004 api_server.go:131] duration metric: took 574.225344ms to wait for apiserver health ...
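
The healthz wait above polls https://192.168.85.2:8443/healthz until the 500 from the unfinished rbac/bootstrap-roles hook turns into a 200 "ok". A minimal sketch of such a poll loop (InsecureSkipVerify because the apiserver cert is self-signed for this profile; URL and timing are illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz retries GET url until it returns 200 or timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz not ok after %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
    }
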
	I1019 17:37:45.105761  252004 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:37:45.117646  252004 system_pods.go:59] 8 kube-system pods found
	I1019 17:37:45.117709  252004 system_pods.go:61] "coredns-66bc5c9577-c4f4b" [05111d3d-bb2d-418d-8839-fd77dd6da259] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:37:45.117723  252004 system_pods.go:61] "etcd-newest-cni-633463" [6a5e2105-f5b2-42fe-b84e-b4fabe762787] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:37:45.117730  252004 system_pods.go:61] "kindnet-9zt9r" [225c1116-2e3f-4fe7-93d6-b3199509c1a8] Running
	I1019 17:37:45.117739  252004 system_pods.go:61] "kube-apiserver-newest-cni-633463" [ed52c336-ad74-4a2b-b340-80f71537080a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:37:45.117746  252004 system_pods.go:61] "kube-controller-manager-newest-cni-633463" [99395d0f-9a8b-4874-a0cc-9e1d8f64950e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:37:45.117753  252004 system_pods.go:61] "kube-proxy-gktcz" [ddc682d3-91d8-48e5-b254-cbb87e6f5106] Running
	I1019 17:37:45.117766  252004 system_pods.go:61] "kube-scheduler-newest-cni-633463" [f1e717aa-1eee-48e8-a48b-8980e8389603] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:37:45.117773  252004 system_pods.go:61] "storage-provisioner" [ba44ef1f-311c-409e-a01b-f15080f8ac35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 17:37:45.117783  252004 system_pods.go:74] duration metric: took 12.015535ms to wait for pod list to return data ...
	I1019 17:37:45.117793  252004 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:37:45.126959  252004 default_sa.go:45] found service account: "default"
	I1019 17:37:45.126993  252004 default_sa.go:55] duration metric: took 9.192621ms for default service account to be created ...
	I1019 17:37:45.127008  252004 kubeadm.go:587] duration metric: took 8.555978668s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:37:45.127028  252004 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:37:45.145607  252004 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 17:37:45.145662  252004 node_conditions.go:123] node cpu capacity is 2
	I1019 17:37:45.145677  252004 node_conditions.go:105] duration metric: took 18.642445ms to run NodePressure ...
	I1019 17:37:45.145698  252004 start.go:242] waiting for startup goroutines ...
	I1019 17:37:45.145706  252004 start.go:247] waiting for cluster config update ...
	I1019 17:37:45.145718  252004 start.go:256] writing updated cluster config ...
	I1019 17:37:45.146054  252004 ssh_runner.go:195] Run: rm -f paused
	I1019 17:37:45.257221  252004 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 17:37:45.261312  252004 out.go:179] * Done! kubectl is now configured to use "newest-cni-633463" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.60334305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.627192703Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=227bd960-6688-4037-bb95-7ed36c0b42a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.644362374Z" level=info msg="Ran pod sandbox 5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa with infra container: kube-system/kindnet-9zt9r/POD" id=227bd960-6688-4037-bb95-7ed36c0b42a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.645935006Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-gktcz/POD" id=af581991-43da-4c9b-8b70-f370df96bd02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.646111229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.650946108Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=af581991-43da-4c9b-8b70-f370df96bd02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.655235444Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=239ee2bc-2c9d-41eb-85f0-2a765e66c2a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.658899156Z" level=info msg="Ran pod sandbox a99e2d28c556b7708b06cc8cf6a389ed06a50b34767f2d9c02332a1296d396b0 with infra container: kube-system/kube-proxy-gktcz/POD" id=af581991-43da-4c9b-8b70-f370df96bd02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.6598011Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=76fff446-594d-4d8b-891f-2eb078bba7d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.667301089Z" level=info msg="Creating container: kube-system/kindnet-9zt9r/kindnet-cni" id=f313fe73-52ad-4673-b5f7-ba3821ee8be9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.667590578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.697479391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.697968366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.723713953Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=86dbd769-63ef-425b-bc2a-1a2251bd62e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.745375326Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c66d764c-f444-4dcf-904f-2039f8b76320 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.748062615Z" level=info msg="Creating container: kube-system/kube-proxy-gktcz/kube-proxy" id=19b5ac8d-ae30-4b48-b903-bdf83a20c09f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.748500734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.770526715Z" level=info msg="Created container edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3: kube-system/kindnet-9zt9r/kindnet-cni" id=f313fe73-52ad-4673-b5f7-ba3821ee8be9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.780994701Z" level=info msg="Starting container: edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3" id=6dd2920a-e10c-4d18-8f4a-893862ccb62f name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.814132722Z" level=info msg="Started container" PID=1060 containerID=edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3 description=kube-system/kindnet-9zt9r/kindnet-cni id=6dd2920a-e10c-4d18-8f4a-893862ccb62f name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.81919203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.819859461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.964640039Z" level=info msg="Created container 4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200: kube-system/kube-proxy-gktcz/kube-proxy" id=19b5ac8d-ae30-4b48-b903-bdf83a20c09f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.975779567Z" level=info msg="Starting container: 4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200" id=8382387b-01f0-45d1-b64e-f3224b666663 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 17:37:43 newest-cni-633463 crio[613]: time="2025-10-19T17:37:43.97961778Z" level=info msg="Started container" PID=1071 containerID=4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200 description=kube-system/kube-proxy-gktcz/kube-proxy id=8382387b-01f0-45d1-b64e-f3224b666663 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a99e2d28c556b7708b06cc8cf6a389ed06a50b34767f2d9c02332a1296d396b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4b7df96eb28b4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   a99e2d28c556b       kube-proxy-gktcz                            kube-system
	edb0c10bd96e6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   5a74404d4b7b6       kindnet-9zt9r                               kube-system
	42c0497fdaeaa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   168c4cb1821d5       kube-controller-manager-newest-cni-633463   kube-system
	8ef7387fc1701       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   4aa7edf88a52e       kube-scheduler-newest-cni-633463            kube-system
	9a825a8a6bd59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   50219b1cab252       etcd-newest-cni-633463                      kube-system
	1fc2f09faeca0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   eb41687a3d152       kube-apiserver-newest-cni-633463            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-633463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-633463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=newest-cni-633463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:37:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-633463
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:37:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 17:37:42 +0000   Sun, 19 Oct 2025 17:37:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-633463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e953ded1-d3da-4e1a-97c3-cbeb95b772c3
	  Boot ID:                    cfd7430e-7038-44cf-9fb8-784318dc677e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-633463                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-9zt9r                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-633463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-633463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-gktcz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-633463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-633463 event: Registered Node newest-cni-633463 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-633463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-633463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-633463 event: Registered Node newest-cni-633463 in Controller
	
	
	==> dmesg <==
	[  +2.251798] overlayfs: idmapped layers are currently not supported
	[Oct19 17:16] overlayfs: idmapped layers are currently not supported
	[Oct19 17:17] overlayfs: idmapped layers are currently not supported
	[  +1.279896] overlayfs: idmapped layers are currently not supported
	[Oct19 17:18] overlayfs: idmapped layers are currently not supported
	[ +36.372879] overlayfs: idmapped layers are currently not supported
	[Oct19 17:19] overlayfs: idmapped layers are currently not supported
	[Oct19 17:24] overlayfs: idmapped layers are currently not supported
	[Oct19 17:25] overlayfs: idmapped layers are currently not supported
	[Oct19 17:26] overlayfs: idmapped layers are currently not supported
	[Oct19 17:27] overlayfs: idmapped layers are currently not supported
	[Oct19 17:28] overlayfs: idmapped layers are currently not supported
	[  +6.438537] hrtimer: interrupt took 32813933 ns
	[Oct19 17:29] overlayfs: idmapped layers are currently not supported
	[Oct19 17:30] overlayfs: idmapped layers are currently not supported
	[ +11.588989] overlayfs: idmapped layers are currently not supported
	[Oct19 17:31] overlayfs: idmapped layers are currently not supported
	[Oct19 17:32] overlayfs: idmapped layers are currently not supported
	[Oct19 17:33] overlayfs: idmapped layers are currently not supported
	[ +26.810052] overlayfs: idmapped layers are currently not supported
	[Oct19 17:34] overlayfs: idmapped layers are currently not supported
	[Oct19 17:35] overlayfs: idmapped layers are currently not supported
	[Oct19 17:36] overlayfs: idmapped layers are currently not supported
	[Oct19 17:37] overlayfs: idmapped layers are currently not supported
	[ +27.886872] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a825a8a6bd59063b51e6c3bc6f2cf81a6e132e5391db8302696b9ee0703d760] <==
	{"level":"warn","ts":"2025-10-19T17:37:40.718033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.746810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.816105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.845312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.874919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.900078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.942280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.961031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:40.973406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.017286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.036960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.047342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.097116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.129231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.168612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.191894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.216209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.234004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.253736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.311559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.338186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.355939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.378702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.402027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:37:41.543512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:37:50 up  1:20,  0 user,  load average: 5.68, 4.37, 3.69
	Linux newest-cni-633463 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [edb0c10bd96e6a233d5db0cc5af3e55d75c346a8b93069bb7933ec6b91cbd6a3] <==
	I1019 17:37:44.007922       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 17:37:44.008503       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 17:37:44.021587       1 main.go:148] setting mtu 1500 for CNI 
	I1019 17:37:44.021699       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 17:37:44.021745       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T17:37:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 17:37:44.218852       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 17:37:44.218938       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 17:37:44.218979       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 17:37:44.220154       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1fc2f09faeca0d391549f1db536068ed44effc7d6871bc5f71421a0b57b3a5af] <==
	I1019 17:37:42.796806       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:37:42.796813       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:37:42.832853       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:37:42.833266       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:37:42.834787       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:37:42.842089       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:37:42.849552       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:37:42.853977       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:37:42.854758       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:37:42.854770       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:37:42.866151       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 17:37:42.878510       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:37:42.886163       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:37:43.373677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:37:43.413103       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:37:43.617239       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 17:37:43.848711       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:37:44.051027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:37:44.132100       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:37:44.433611       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.143.97"}
	I1019 17:37:44.518433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.163.148"}
	I1019 17:37:46.213206       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:37:46.536901       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:37:46.594329       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:37:46.686321       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [42c0497fdaeaab2cbe2151966b50ab78bb0c3fcd1dc38f87ffed21786acc1840] <==
	I1019 17:37:46.135953       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 17:37:46.136004       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:37:46.143690       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:37:46.144408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:37:46.144438       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:37:46.144445       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:37:46.144521       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 17:37:46.145467       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 17:37:46.145582       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 17:37:46.152684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 17:37:46.158285       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 17:37:46.163801       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 17:37:46.163880       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 17:37:46.164010       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 17:37:46.172984       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:37:46.178630       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:37:46.181964       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 17:37:46.178682       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:37:46.178704       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 17:37:46.179268       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 17:37:46.179254       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 17:37:46.183915       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 17:37:46.190015       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:37:46.200208       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:37:46.201457       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [4b7df96eb28b4708e4c2abd56d95690f26c53f31fd65b50b73ac8b665b443200] <==
	I1019 17:37:44.293441       1 server_linux.go:53] "Using iptables proxy"
	I1019 17:37:44.597807       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:37:44.698405       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:37:44.698495       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 17:37:44.698647       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:37:44.783197       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 17:37:44.783334       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:37:44.871605       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:37:44.872037       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:37:44.872231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:37:44.877290       1 config.go:200] "Starting service config controller"
	I1019 17:37:44.877351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:37:44.877396       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:37:44.877423       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:37:44.877478       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:37:44.877504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:37:44.878133       1 config.go:309] "Starting node config controller"
	I1019 17:37:44.880463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:37:44.880526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 17:37:44.978427       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:37:44.978518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:37:44.978604       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8ef7387fc1701d70af2887f0cf4cfe3b885bc5af4949d767e8453ebd18d00d87] <==
	I1019 17:37:39.964045       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:37:42.710984       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:37:42.713115       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:37:42.713148       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:37:42.713156       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:37:42.869178       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:37:42.883278       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:37:42.885712       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:37:42.885825       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:37:42.885841       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:37:42.885856       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:37:42.989376       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.116626     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-633463\" not found" node="newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.614626     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.866695     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.866818     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.866850     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.869911     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.944431     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-633463\" already exists" pod="kube-system/kube-scheduler-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.944465     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.956115     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-633463\" already exists" pod="kube-system/etcd-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.956150     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.969364     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-633463\" already exists" pod="kube-system/kube-apiserver-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: I1019 17:37:42.969396     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-633463"
	Oct 19 17:37:42 newest-cni-633463 kubelet[729]: E1019 17:37:42.981586     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-633463\" already exists" pod="kube-system/kube-controller-manager-newest-cni-633463"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.286646     729 apiserver.go:52] "Watching apiserver"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.313406     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404128     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-cni-cfg\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404172     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-xtables-lock\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404213     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc682d3-91d8-48e5-b254-cbb87e6f5106-lib-modules\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404256     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc682d3-91d8-48e5-b254-cbb87e6f5106-xtables-lock\") pod \"kube-proxy-gktcz\" (UID: \"ddc682d3-91d8-48e5-b254-cbb87e6f5106\") " pod="kube-system/kube-proxy-gktcz"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.404274     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/225c1116-2e3f-4fe7-93d6-b3199509c1a8-lib-modules\") pod \"kindnet-9zt9r\" (UID: \"225c1116-2e3f-4fe7-93d6-b3199509c1a8\") " pod="kube-system/kindnet-9zt9r"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: I1019 17:37:43.451100     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 17:37:43 newest-cni-633463 kubelet[729]: W1019 17:37:43.646656     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dc48a98a25fc7f3c1945233d9c1787f26e7c46f1719c3f67ceb4d37d986fe3ea/crio-5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa WatchSource:0}: Error finding container 5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa: Status 404 returned error can't find the container with id 5a74404d4b7b6b7c84dbc1f0067c1c6c7b693c7ec966427045d97eb3a40d0efa
	Oct 19 17:37:46 newest-cni-633463 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 17:37:46 newest-cni-633463 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 17:37:46 newest-cni-633463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
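
At the moment captured in the dump above, the node's Ready condition was False (KubeletNotReady: no CNI configuration file in /etc/cni/net.d/) and the node.kubernetes.io/not-ready:NoSchedule taint was still in place. To reproduce that check outside the harness, a minimal Go sketch that shells out to kubectl (the context name is copied from this run and the jsonpath query is illustrative, not the harness's own code):

package main

import (
	"fmt"
	"os/exec"
)

// Print each node's Ready condition, mirroring the "describe nodes"
// section above. The kubectl context name is copied from this run;
// adjust it for another cluster.
func main() {
	out, err := exec.Command("kubectl",
		"--context", "newest-cni-633463",
		"get", "nodes", "-o",
		`jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.status.conditions[?(@.type=="Ready")].reason}{"\n"}{end}`,
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out)) // e.g. "newest-cni-633463  False  KubeletNotReady"
}
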
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-633463 -n newest-cni-633463
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-633463 -n newest-cni-633463: exit status 2 (358.532583ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-633463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89: exit status 1 (83.97301ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-c4f4b" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-v8z7r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zcp89" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-633463 describe pod coredns-66bc5c9577-c4f4b storage-provisioner dashboard-metrics-scraper-6ffb444bf9-v8z7r kubernetes-dashboard-855c9754f9-zcp89: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.78s)
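
The post-mortem above gathers non-running pods with a field selector (status.phase!=Running). A minimal Go sketch of the same query, shelling out to kubectl the way the harness does (context name copied from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List pods across all namespaces whose phase is not Running, the same
// filter helpers_test.go applies above. Replace the context name for
// another cluster.
func main() {
	out, err := exec.Command("kubectl",
		"--context", "newest-cni-633463",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}",
	).Output()
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	names := strings.Fields(string(out))
	fmt.Printf("non-running pods: %v\n", names)
}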


Test pass (259/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.75
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 6.91
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 178.94
31 TestAddons/serial/GCPAuth/Namespaces 0.26
32 TestAddons/serial/GCPAuth/FakeCredentials 8.88
48 TestAddons/StoppedEnableDisable 12.42
49 TestCertOptions 36.81
50 TestCertExpiration 242.1
52 TestForceSystemdFlag 51.88
53 TestForceSystemdEnv 43.64
59 TestErrorSpam/setup 30.54
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.16
62 TestErrorSpam/pause 5.49
63 TestErrorSpam/unpause 5.18
64 TestErrorSpam/stop 1.51
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 85.04
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 16.95
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.14
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
76 TestFunctional/serial/CacheCmd/cache/add_local 1.11
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 35.99
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.47
88 TestFunctional/serial/InvalidService 4.51
90 TestFunctional/parallel/ConfigCmd 0.45
92 TestFunctional/parallel/DryRun 0.66
93 TestFunctional/parallel/InternationalLanguage 0.28
94 TestFunctional/parallel/StatusCmd 1.3
99 TestFunctional/parallel/AddonsCmd 0.19
100 TestFunctional/parallel/PersistentVolumeClaim 23.84
102 TestFunctional/parallel/SSHCmd 0.79
103 TestFunctional/parallel/CpCmd 2.57
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.7
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
114 TestFunctional/parallel/License 0.45
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.57
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
128 TestFunctional/parallel/ProfileCmd/profile_list 0.43
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
130 TestFunctional/parallel/MountCmd/any-port 6.92
131 TestFunctional/parallel/MountCmd/specific-port 1.94
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.38
133 TestFunctional/parallel/ServiceCmd/List 0.59
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.14
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
145 TestFunctional/parallel/ImageCommands/Setup 2.42
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 203.36
164 TestMultiControlPlane/serial/DeployApp 7.23
165 TestMultiControlPlane/serial/PingHostFromPods 1.47
166 TestMultiControlPlane/serial/AddWorkerNode 60.68
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
169 TestMultiControlPlane/serial/CopyFile 20.3
170 TestMultiControlPlane/serial/StopSecondaryNode 12.85
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
172 TestMultiControlPlane/serial/RestartSecondaryNode 35.29
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.79
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 128.18
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.97
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
177 TestMultiControlPlane/serial/StopCluster 36.11
178 TestMultiControlPlane/serial/RestartCluster 67.23
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
180 TestMultiControlPlane/serial/AddSecondaryNode 80.03
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
185 TestJSONOutput/start/Command 79.96
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.76
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 39.04
211 TestKicCustomNetwork/use_default_bridge_network 42.84
212 TestKicExistingNetwork 35.27
213 TestKicCustomSubnet 31.95
214 TestKicStaticIP 36.37
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 76.66
219 TestMountStart/serial/StartWithMountFirst 7.12
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.45
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.14
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 113.91
231 TestMultiNode/serial/DeployApp2Nodes 4.74
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 56.82
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.42
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 7.97
239 TestMultiNode/serial/RestartKeepsNodes 73.32
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 23.98
242 TestMultiNode/serial/RestartMultiNode 57.18
243 TestMultiNode/serial/ValidateNameConflict 36.27
248 TestPreload 127.05
250 TestScheduledStopUnix 110.94
253 TestInsufficientStorage 13.76
254 TestRunningBinaryUpgrade 57.79
256 TestKubernetesUpgrade 350.58
257 TestMissingContainerUpgrade 102.7
259 TestPause/serial/Start 88.53
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
262 TestNoKubernetes/serial/StartWithK8s 40.64
263 TestNoKubernetes/serial/StartWithStopK8s 19.11
264 TestNoKubernetes/serial/Start 6.19
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
266 TestNoKubernetes/serial/ProfileList 1.14
267 TestNoKubernetes/serial/Stop 1.32
268 TestNoKubernetes/serial/StartNoArgs 7.5
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
277 TestNetworkPlugins/group/false 3.8
281 TestPause/serial/SecondStartNoReconfiguration 24.07
283 TestStoppedBinaryUpgrade/Setup 1.32
284 TestStoppedBinaryUpgrade/Upgrade 63.7
292 TestNetworkPlugins/group/auto/Start 92.45
293 TestStoppedBinaryUpgrade/MinikubeLogs 1.68
294 TestNetworkPlugins/group/kindnet/Start 83.96
295 TestNetworkPlugins/group/auto/KubeletFlags 0.3
296 TestNetworkPlugins/group/auto/NetCatPod 10.34
297 TestNetworkPlugins/group/auto/DNS 0.26
298 TestNetworkPlugins/group/auto/Localhost 0.16
299 TestNetworkPlugins/group/auto/HairPin 0.13
300 TestNetworkPlugins/group/calico/Start 73.39
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
304 TestNetworkPlugins/group/kindnet/DNS 0.22
305 TestNetworkPlugins/group/kindnet/Localhost 0.19
306 TestNetworkPlugins/group/kindnet/HairPin 0.24
307 TestNetworkPlugins/group/custom-flannel/Start 60.23
308 TestNetworkPlugins/group/calico/ControllerPod 6.02
309 TestNetworkPlugins/group/calico/KubeletFlags 0.43
310 TestNetworkPlugins/group/calico/NetCatPod 13.3
311 TestNetworkPlugins/group/calico/DNS 0.19
312 TestNetworkPlugins/group/calico/Localhost 0.14
313 TestNetworkPlugins/group/calico/HairPin 0.14
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.37
316 TestNetworkPlugins/group/enable-default-cni/Start 85.48
317 TestNetworkPlugins/group/custom-flannel/DNS 0.25
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
320 TestNetworkPlugins/group/flannel/Start 59.51
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
325 TestNetworkPlugins/group/flannel/NetCatPod 10.26
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
329 TestNetworkPlugins/group/flannel/DNS 0.19
330 TestNetworkPlugins/group/flannel/Localhost 0.17
331 TestNetworkPlugins/group/flannel/HairPin 0.17
332 TestNetworkPlugins/group/bridge/Start 54.64
334 TestStartStop/group/old-k8s-version/serial/FirstStart 68.05
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
336 TestNetworkPlugins/group/bridge/NetCatPod 11.32
337 TestNetworkPlugins/group/bridge/DNS 0.19
338 TestNetworkPlugins/group/bridge/Localhost 0.14
339 TestNetworkPlugins/group/bridge/HairPin 0.14
340 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
343 TestStartStop/group/no-preload/serial/FirstStart 67.5
344 TestStartStop/group/old-k8s-version/serial/Stop 13.88
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
346 TestStartStop/group/old-k8s-version/serial/SecondStart 61.02
347 TestStartStop/group/no-preload/serial/DeployApp 9.38
349 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
350 TestStartStop/group/no-preload/serial/Stop 12.13
351 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
352 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
355 TestStartStop/group/no-preload/serial/SecondStart 53.43
357 TestStartStop/group/embed-certs/serial/FirstStart 85.9
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
359 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
360 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
363 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.62
364 TestStartStop/group/embed-certs/serial/DeployApp 8.45
366 TestStartStop/group/embed-certs/serial/Stop 12.24
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
368 TestStartStop/group/embed-certs/serial/SecondStart 58.26
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.4
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
379 TestStartStop/group/newest-cni/serial/FirstStart 41.03
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/Stop 1.56
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
385 TestStartStop/group/newest-cni/serial/SecondStart 18.76
386 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
387 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
TestDownloadOnly/v1.28.0/json-events (7.75s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-860860 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-860860 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.745590605s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.75s)
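
This test drives minikube start with -o=json and --download-only and asserts on the emitted event stream. As a rough sketch only, that stream can be consumed as one JSON object per line; the plain "minikube" binary name, the demo profile name, and the "type" field read below are assumptions rather than the test's actual parser (the harness invokes its own out/minikube-linux-arm64 build):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// Run a download-only start and decode each line of JSON output.
// Flags mirror the invocation above; profile name and the "type"
// key are assumptions for illustration.
func main() {
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-demo", "--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println("event:", ev["type"])
		}
	}
	_ = cmd.Wait()
}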
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1019 16:20:43.442030    4111 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1019 16:20:43.442114    4111 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
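
preload-exists only has to confirm that the tarball is present in the local cache at the path logged above. A minimal sketch of that check, with the minikube home and preload file name copied verbatim from this run (both differ per environment):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Check for the preloaded-images tarball the test found above. The
// minikube home and preload version strings are taken from this run.
func main() {
	home := "/home/jenkins/minikube-integration/21683-2307/.minikube"
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", tarball)
}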
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-860860
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-860860: exit status 85 (93.560328ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-860860 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-860860 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:35
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:35.739767    4116 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:35.739999    4116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:35.740027    4116 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:35.740046    4116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:35.740329    4116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	W1019 16:20:35.740499    4116 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21683-2307/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-2307/.minikube/config/config.json: no such file or directory
	I1019 16:20:35.740973    4116 out.go:368] Setting JSON to true
	I1019 16:20:35.741799    4116 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":184,"bootTime":1760890652,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:20:35.741893    4116 start.go:143] virtualization:  
	I1019 16:20:35.746042    4116 out.go:99] [download-only-860860] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1019 16:20:35.746217    4116 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball: no such file or directory
	I1019 16:20:35.746352    4116 notify.go:221] Checking for updates...
	I1019 16:20:35.749239    4116 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:20:35.752357    4116 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:35.755336    4116 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:20:35.758324    4116 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:20:35.761214    4116 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1019 16:20:35.766721    4116 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:20:35.766990    4116 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:35.789591    4116 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:20:35.789701    4116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:36.204961    4116 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 16:20:36.19570691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:36.205064    4116 docker.go:319] overlay module found
	I1019 16:20:36.208236    4116 out.go:99] Using the docker driver based on user configuration
	I1019 16:20:36.208266    4116 start.go:309] selected driver: docker
	I1019 16:20:36.208285    4116 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:36.208385    4116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:36.265688    4116 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 16:20:36.256959403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:36.265836    4116 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:36.266135    4116 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1019 16:20:36.266295    4116 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:20:36.269379    4116 out.go:171] Using Docker driver with root privileges
	I1019 16:20:36.272201    4116 cni.go:84] Creating CNI manager for ""
	I1019 16:20:36.272262    4116 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:20:36.272274    4116 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:36.272357    4116 start.go:353] cluster config:
	{Name:download-only-860860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-860860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:36.275300    4116 out.go:99] Starting "download-only-860860" primary control-plane node in "download-only-860860" cluster
	I1019 16:20:36.275319    4116 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:20:36.278097    4116 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:36.278128    4116 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:20:36.278214    4116 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:36.293939    4116 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:36.294106    4116 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:36.294204    4116 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:36.332596    4116 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 16:20:36.332625    4116 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:36.332769    4116 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:20:36.336199    4116 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1019 16:20:36.336223    4116 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1019 16:20:36.419919    4116 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1019 16:20:36.420048    4116 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-860860 host does not exist
	  To start a cluster, run: "minikube start -p download-only-860860"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-860860
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (6.91s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-436192 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-436192 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.907808183s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (6.91s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1019 16:20:50.795597    4111 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1019 16:20:50.795633    4111 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-436192
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-436192: exit status 85 (87.507026ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-860860 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-860860 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-860860                                                                                                                                                   │ download-only-860860 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ -o=json --download-only -p download-only-436192 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-436192 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:43.933336    4314 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:43.933539    4314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:43.933575    4314 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:43.933594    4314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:43.933987    4314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:20:43.934581    4314 out.go:368] Setting JSON to true
	I1019 16:20:43.935796    4314 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":192,"bootTime":1760890652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:20:43.935894    4314 start.go:143] virtualization:  
	I1019 16:20:43.939211    4314 out.go:99] [download-only-436192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 16:20:43.939492    4314 notify.go:221] Checking for updates...
	I1019 16:20:43.942452    4314 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:20:43.945485    4314 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:43.948503    4314 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:20:43.951677    4314 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:20:43.954860    4314 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1019 16:20:43.960946    4314 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:20:43.961257    4314 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:43.991022    4314 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:20:43.991127    4314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:44.053818    4314 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-10-19 16:20:44.043558082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:44.053935    4314 docker.go:319] overlay module found
	I1019 16:20:44.056911    4314 out.go:99] Using the docker driver based on user configuration
	I1019 16:20:44.056958    4314 start.go:309] selected driver: docker
	I1019 16:20:44.056966    4314 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:44.057072    4314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:44.112488    4314 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-10-19 16:20:44.102955219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:20:44.112649    4314 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:44.112934    4314 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1019 16:20:44.113099    4314 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:20:44.116229    4314 out.go:171] Using Docker driver with root privileges
	I1019 16:20:44.119297    4314 cni.go:84] Creating CNI manager for ""
	I1019 16:20:44.119368    4314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 16:20:44.119380    4314 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:44.119461    4314 start.go:353] cluster config:
	{Name:download-only-436192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-436192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:44.122434    4314 out.go:99] Starting "download-only-436192" primary control-plane node in "download-only-436192" cluster
	I1019 16:20:44.122459    4314 cache.go:124] Beginning downloading kic base image for docker with crio
	I1019 16:20:44.125326    4314 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:44.125362    4314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:44.125406    4314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:44.142722    4314 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:44.142860    4314 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:44.142881    4314 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 16:20:44.142891    4314 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 16:20:44.142899    4314 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 16:20:44.177936    4314 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 16:20:44.177975    4314 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:44.178138    4314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:20:44.181238    4314 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1019 16:20:44.181273    4314 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1019 16:20:44.269370    4314 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1019 16:20:44.269425    4314 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21683-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-436192 host does not exist
	  To start a cluster, run: "minikube start -p download-only-436192"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-436192
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1019 16:20:51.983987    4111 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-533416 --alsologtostderr --binary-mirror http://127.0.0.1:41649 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-533416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-533416
--- PASS: TestBinaryMirror (0.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-567517
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-567517: exit status 85 (68.563942ms)

-- stdout --
	* Profile "addons-567517" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-567517"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-567517
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-567517: exit status 85 (82.263255ms)

-- stdout --
	* Profile "addons-567517" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-567517"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (178.94s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-567517 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-567517 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m58.942857335s)
--- PASS: TestAddons/Setup (178.94s)

TestAddons/serial/GCPAuth/Namespaces (0.26s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-567517 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-567517 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.26s)

TestAddons/serial/GCPAuth/FakeCredentials (8.88s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-567517 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-567517 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f108544c-eabe-4d36-ab30-00fe9a8a6de4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f108544c-eabe-4d36-ab30-00fe9a8a6de4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004635088s
addons_test.go:694: (dbg) Run:  kubectl --context addons-567517 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-567517 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-567517 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-567517 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.88s)

TestAddons/StoppedEnableDisable (12.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-567517
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-567517: (12.13804603s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-567517
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-567517
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-567517
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

TestCertOptions (36.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-578633 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-578633 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.980411171s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-578633 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-578633 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-578633 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-578633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-578633
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-578633: (2.078019756s)
--- PASS: TestCertOptions (36.81s)

TestCertExpiration (242.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-397560 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-397560 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.518196437s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-397560 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-397560 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.023530638s)
helpers_test.go:175: Cleaning up "cert-expiration-397560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-397560
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-397560: (2.559322776s)
--- PASS: TestCertExpiration (242.10s)

TestForceSystemdFlag (51.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-820205 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-820205 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.483415611s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-820205 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-820205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-820205
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-820205: (2.958087153s)
--- PASS: TestForceSystemdFlag (51.88s)

TestForceSystemdEnv (43.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-386165 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-386165 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.507110105s)
helpers_test.go:175: Cleaning up "force-systemd-env-386165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-386165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-386165: (3.129626713s)
--- PASS: TestForceSystemdEnv (43.64s)

TestErrorSpam/setup (30.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-643225 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643225 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-643225 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643225 --driver=docker  --container-runtime=crio: (30.539717813s)
--- PASS: TestErrorSpam/setup (30.54s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (5.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause: exit status 80 (1.717331642s)

-- stdout --
	* Pausing node nospam-643225 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:27:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause: exit status 80 (2.00044444s)

-- stdout --
	* Pausing node nospam-643225 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:27:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause: exit status 80 (1.767374822s)

-- stdout --
	* Pausing node nospam-643225 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:27:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.49s)

TestErrorSpam/unpause (5.18s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause: exit status 80 (1.630746558s)

-- stdout --
	* Unpausing node nospam-643225 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:27:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause: exit status 80 (1.936148316s)

-- stdout --
	* Unpausing node nospam-643225 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:28:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause: exit status 80 (1.607767696s)

-- stdout --
	* Unpausing node nospam-643225 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T16:28:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.18s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 stop: (1.308449262s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643225 --log_dir /tmp/nospam-643225 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-2307/.minikube/files/etc/test/nested/copy/4111/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (85.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-328874 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1019 16:28:52.672787    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:52.679184    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:52.690594    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:52.712029    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:52.753398    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:52.834824    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:52.996292    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:53.317913    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:53.959503    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:55.240928    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:57.802649    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:02.924369    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:13.165761    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-328874 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m25.043084958s)
--- PASS: TestFunctional/serial/StartWithProxy (85.04s)
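
Note on the repeated cert_rotation errors above: they appear to come from a stale kubeconfig entry that still references the client cert of the deleted addons-567517 profile, and they do not affect this test. A minimal cleanup sketch, assuming the stale context is named after that profile:

	# List contexts, then drop the one still pointing at the deleted profile
	kubectl config get-contexts
	kubectl config delete-context addons-567517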

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (16.95s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1019 16:29:33.314497    4111 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-328874 --alsologtostderr -v=8
E1019 16:29:33.647078    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-328874 --alsologtostderr -v=8: (16.938902804s)
functional_test.go:678: soft start took 16.944758125s for "functional-328874" cluster.
I1019 16:29:50.261023    4111 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (16.95s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-328874 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.14s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 cache add registry.k8s.io/pause:3.1: (1.15860473s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 cache add registry.k8s.io/pause:3.3: (1.117330348s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 cache add registry.k8s.io/pause:latest: (1.159420451s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)
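
The cache subcommands exercised in this group form one small workflow; a minimal sketch against the same profile:

	minikube -p functional-328874 cache add registry.k8s.io/pause:3.1   # pull on the host, load into the node
	minikube cache list                                                 # host-side view of cached images
	minikube -p functional-328874 ssh sudo crictl images                # verify inside the node
	minikube cache delete registry.k8s.io/pause:3.1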

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-328874 /tmp/TestFunctionalserialCacheCmdcacheadd_local3292555746/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cache add minikube-local-cache-test:functional-328874
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cache delete minikube-local-cache-test:functional-328874
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-328874
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (321.761338ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
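
What the reload step restores, as a sketch: removing an image inside the node makes `crictl inspecti` fail, and `cache reload` pushes every cached image back in.

	minikube -p functional-328874 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-328874 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	minikube -p functional-328874 cache reload
	minikube -p functional-328874 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again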

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 kubectl -- --context functional-328874 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-328874 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.99s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-328874 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1019 16:30:14.608420    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-328874 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.993881212s)
functional_test.go:776: restart took 35.993972067s for "functional-328874" cluster.
I1019 16:30:33.664912    4111 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (35.99s)
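
A sketch of verifying by hand that the extra apiserver flag landed (assuming the kube-apiserver pod follows minikube's usual <component>-<profile> naming):

	kubectl --context functional-328874 -n kube-system get pod kube-apiserver-functional-328874 -o yaml \
	  | grep enable-admission-plugins   # expect NamespaceAutoProvision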

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-328874 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
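
The same phase/readiness check can be reproduced with jsonpath instead of the test's Go code; a minimal sketch:

	kubectl --context functional-328874 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'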

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 logs: (1.445583995s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 logs --file /tmp/TestFunctionalserialLogsFileCmd3276156889/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 logs --file /tmp/TestFunctionalserialLogsFileCmd3276156889/001/logs.txt: (1.473402858s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.51s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-328874 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-328874
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-328874: exit status 115 (403.799998ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31828 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-328874 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.51s)
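
testdata/invalidsvc.yaml itself is not shown in this log; a hypothetical equivalent that reproduces the SVC_UNREACHABLE exit is a NodePort service whose selector matches no pod:

	kubectl --context functional-328874 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod        # hypothetical selector; nothing matches it
	  ports:
	    - port: 80
	EOF
	minikube service invalid-svc -p functional-328874   # exit 115: SVC_UNREACHABLE, as above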

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 config get cpus: exit status 14 (73.770008ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 config get cpus: exit status 14 (82.775883ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
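
The config round-trip the test performs, as a sketch; exit status 14 is the not-found code `config get` returns above:

	minikube -p functional-328874 config set cpus 2
	minikube -p functional-328874 config get cpus     # prints 2
	minikube -p functional-328874 config unset cpus
	minikube -p functional-328874 config get cpus     # exit 14: key not in config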

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-328874 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-328874 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (247.58388ms)

                                                
                                                
-- stdout --
	* [functional-328874] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:41:10.849948   30406 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:41:10.852761   30406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:10.852776   30406 out.go:374] Setting ErrFile to fd 2...
	I1019 16:41:10.852781   30406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:10.853069   30406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:41:10.853454   30406 out.go:368] Setting JSON to false
	I1019 16:41:10.862113   30406 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1419,"bootTime":1760890652,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:41:10.862193   30406 start.go:143] virtualization:  
	I1019 16:41:10.867504   30406 out.go:179] * [functional-328874] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 16:41:10.870803   30406 notify.go:221] Checking for updates...
	I1019 16:41:10.874846   30406 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:41:10.877761   30406 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:41:10.881648   30406 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:41:10.884600   30406 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:41:10.887412   30406 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 16:41:10.890219   30406 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:41:10.895305   30406 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:41:10.895960   30406 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:41:10.936663   30406 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:41:10.936767   30406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:41:11.009326   30406 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 16:41:10.996464897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:41:11.009439   30406 docker.go:319] overlay module found
	I1019 16:41:11.013548   30406 out.go:179] * Using the docker driver based on existing profile
	I1019 16:41:11.016609   30406 start.go:309] selected driver: docker
	I1019 16:41:11.016631   30406 start.go:930] validating driver "docker" against &{Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:41:11.016742   30406 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:41:11.020195   30406 out.go:203] 
	W1019 16:41:11.023014   30406 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 16:41:11.025844   30406 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-328874 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.66s)
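
A sketch of the dry-run contract being tested: validation runs (and can fail, exit 23 here) without touching the existing cluster.

	minikube start -p functional-328874 --dry-run --memory 250MB   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	minikube start -p functional-328874 --dry-run                  # validates against the existing profile only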

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-328874 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-328874 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (276.73359ms)

                                                
                                                
-- stdout --
	* [functional-328874] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:41:10.589376   30329 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:41:10.589486   30329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:10.589491   30329 out.go:374] Setting ErrFile to fd 2...
	I1019 16:41:10.589496   30329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:10.591383   30329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:41:10.591829   30329 out.go:368] Setting JSON to false
	I1019 16:41:10.592602   30329 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1418,"bootTime":1760890652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 16:41:10.592673   30329 start.go:143] virtualization:  
	I1019 16:41:10.596367   30329 out.go:179] * [functional-328874] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1019 16:41:10.600371   30329 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:41:10.600528   30329 notify.go:221] Checking for updates...
	I1019 16:41:10.608577   30329 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:41:10.611467   30329 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 16:41:10.614266   30329 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 16:41:10.617117   30329 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 16:41:10.619901   30329 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:41:10.623134   30329 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:41:10.623811   30329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:41:10.668216   30329 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 16:41:10.668361   30329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:41:10.760369   30329 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 16:41:10.750330624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:41:10.760490   30329 docker.go:319] overlay module found
	I1019 16:41:10.763622   30329 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1019 16:41:10.766384   30329 start.go:309] selected driver: docker
	I1019 16:41:10.766400   30329 start.go:930] validating driver "docker" against &{Name:functional-328874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-328874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:41:10.766509   30329 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:41:10.770119   30329 out.go:203] 
	W1019 16:41:10.773909   30329 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1019 16:41:10.776628   30329 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
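
The -f flag takes a Go template over the status struct; a sketch using the same fields the test queries (the "kublet:" label above is literal template text the test prints, not a field name):

	minikube -p functional-328874 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	minikube -p functional-328874 status -o json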

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [50fd0888-9ce0-4aed-a3d8-3d09f3e58f1f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00389374s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-328874 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-328874 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-328874 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-328874 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ef50b569-6ef1-4449-be97-5f888d211c7e] Pending
helpers_test.go:352: "sp-pod" [ef50b569-6ef1-4449-be97-5f888d211c7e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ef50b569-6ef1-4449-be97-5f888d211c7e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.021187309s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-328874 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-328874 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-328874 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4b1b6e01-019b-4227-b141-29265d7e60c3] Pending
helpers_test.go:352: "sp-pod" [4b1b6e01-019b-4227-b141-29265d7e60c3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003267233s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-328874 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.84s)
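
A minimal sketch of the claim half of this round-trip (the real testdata/storage-provisioner/pvc.yaml is not shown here; the size and access mode are assumptions, only the name myclaim comes from the log):

	kubectl --context functional-328874 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF
	kubectl --context functional-328874 get pvc myclaim -o json   # wait for status.phase: Bound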

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh -n functional-328874 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cp functional-328874:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd376228002/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh -n functional-328874 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh -n functional-328874 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.57s)
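
The cp command copies in both directions; a sketch mirroring the cases above:

	minikube -p functional-328874 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
	minikube -p functional-328874 cp functional-328874:/home/docker/cp-test.txt /tmp/out.txt    # node -> host
	minikube -p functional-328874 ssh -n functional-328874 "sudo cat /home/docker/cp-test.txt"  # verify in-node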

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4111/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo cat /etc/test/nested/copy/4111/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
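
What the test relies on: files placed under $MINIKUBE_HOME/files/<path> are copied to /<path> inside the node when the cluster starts. A sketch of the mechanism:

	mkdir -p ~/.minikube/files/etc/test/nested/copy/4111
	echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/4111/hosts
	# after the next `minikube start`:
	minikube -p functional-328874 ssh "sudo cat /etc/test/nested/copy/4111/hosts"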

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4111.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo cat /etc/ssl/certs/4111.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4111.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo cat /usr/share/ca-certificates/4111.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo cat /etc/ssl/certs/41112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo cat /usr/share/ca-certificates/41112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
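
The paired checks above cover minikube's cert sync: certs dropped into $MINIKUBE_HOME/certs are copied into the node's trust store, and the hashed .0 entries (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names for the synced files. A spot-check sketch, assuming the .0 entry pairs with 4111.pem as the test order suggests:

	minikube -p functional-328874 ssh "sudo cat /etc/ssl/certs/4111.pem"
	# if openssl is available in the node, the subject hash should match the .0 filename:
	minikube -p functional-328874 ssh "sudo openssl x509 -noout -subject_hash -in /etc/ssl/certs/4111.pem"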

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-328874 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh "sudo systemctl is-active docker": exit status 1 (272.07808ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh "sudo systemctl is-active containerd": exit status 1 (309.040558ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
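
Only the selected runtime should be active; `systemctl is-active` exits 3 for an inactive unit, which is the "Process exited with status 3" the ssh wrapper reports above. A sketch:

	minikube -p functional-328874 ssh "sudo systemctl is-active crio"         # expected: active (selected runtime)
	minikube -p functional-328874 ssh "sudo systemctl is-active docker"       # inactive, remote exit 3
	minikube -p functional-328874 ssh "sudo systemctl is-active containerd"   # inactive, remote exit 3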

                                                
                                    
x
+
TestFunctional/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-328874 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-328874 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-328874 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26446: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-328874 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-328874 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-328874 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9667db3d-7e5a-4b15-ab97-335791a1b36f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9667db3d-7e5a-4b15-ab97-335791a1b36f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00344366s
I1019 16:30:52.701952    4111 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.57s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-328874 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.50.63 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
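
The tunnel flow in one sketch: `minikube tunnel` must stay running while it assigns LoadBalancer ingress IPs, which is why the test drives it as a background daemon.

	minikube -p functional-328874 tunnel --alsologtostderr &   # keep running; may prompt for sudo
	kubectl --context functional-328874 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'       # e.g. 10.101.50.63 above
	curl "http://$(kubectl --context functional-328874 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"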

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-328874 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "368.095575ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.65703ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "373.77743ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.722053ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
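
The timings above show why --light exists: it skips the per-profile status probes (~57ms versus ~374ms for the full listing). A sketch of consuming the JSON output; the top-level valid/invalid arrays and the Name field are an assumption about the schema, so verify them against your binary's output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	// Assumed schema: {"invalid": [...], "valid": [{"Name": ...}, ...]}
	var lists struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}
	if err := json.Unmarshal(out, &lists); err != nil {
		panic(err)
	}
	for _, p := range lists.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	fmt.Printf("%d invalid profiles\n", len(lists.Invalid))
}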

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.92s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdany-port1536248029/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760892057972269610" to /tmp/TestFunctionalparallelMountCmdany-port1536248029/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760892057972269610" to /tmp/TestFunctionalparallelMountCmdany-port1536248029/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760892057972269610" to /tmp/TestFunctionalparallelMountCmdany-port1536248029/001/test-1760892057972269610
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.588539ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I1019 16:40:58.337859    4111 retry.go:31] will retry after 506.672526ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 16:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 16:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 16:40 test-1760892057972269610
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh cat /mount-9p/test-1760892057972269610
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-328874 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [dea57983-c92f-4971-90bb-701e41fcbf33] Pending
helpers_test.go:352: "busybox-mount" [dea57983-c92f-4971-90bb-701e41fcbf33] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [dea57983-c92f-4971-90bb-701e41fcbf33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [dea57983-c92f-4971-90bb-701e41fcbf33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004011108s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-328874 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdany-port1536248029/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.92s)
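
Note the shape of this log: the first findmnt probe races the mount daemon and fails, retry.go backs off (~507ms here), and the second probe succeeds. A sketch of that probe-with-backoff pattern (the attempt cap and doubling backoff are illustrative assumptions, not the harness's actual retry policy):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		// Same probe the test runs: is the 9p mount visible in the guest?
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-328874",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mount never became visible")
}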

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdspecific-port1210891588/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.99811ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I1019 16:41:05.225467    4111 retry.go:31] will retry after 567.368782ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdspecific-port1210891588/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh "sudo umount -f /mount-9p": exit status 1 (274.449978ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-328874 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdspecific-port1210891588/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999889966/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999889966/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999889966/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T" /mount1: exit status 1 (574.240686ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I1019 16:41:07.411857    4111 retry.go:31] will retry after 592.825117ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-328874 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999889966/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999889966/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-328874 /tmp/TestFunctionalparallelMountCmdVerifyCleanup999889966/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.38s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 service list -o json
functional_test.go:1504: Took "587.022874ms" to run "out/minikube-linux-arm64 -p functional-328874 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)
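
"service list -o json" is the scriptable form of the listing. A sketch of decoding it; the Namespace/Name/URLs field names are an assumption about the current schema and should be checked against your minikube version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// serviceEntry is an assumed shape for one row of `service list -o json`.
type serviceEntry struct {
	Namespace string   `json:"Namespace"`
	Name      string   `json:"Name"`
	URLs      []string `json:"URLs"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-328874",
		"service", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var services []serviceEntry
	if err := json.Unmarshal(out, &services); err != nil {
		panic(err)
	}
	for _, s := range services {
		fmt.Printf("%s/%s %v\n", s.Namespace, s.Name, s.URLs)
	}
}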

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.14s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 version -o=json --components: (1.139151614s)
--- PASS: TestFunctional/parallel/Version/components (1.14s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-328874 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-328874 image ls --format short --alsologtostderr:
I1019 16:41:25.835229   32531 out.go:360] Setting OutFile to fd 1 ...
I1019 16:41:25.835375   32531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:25.835387   32531 out.go:374] Setting ErrFile to fd 2...
I1019 16:41:25.835393   32531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:25.835688   32531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
I1019 16:41:25.836322   32531 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:25.836477   32531 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:25.836990   32531 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
I1019 16:41:25.855642   32531 ssh_runner.go:195] Run: systemctl --version
I1019 16:41:25.855698   32531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
I1019 16:41:25.876563   32531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
I1019 16:41:25.985121   32531 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-328874 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-328874  │ 1338a14ea7aab │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-328874 image ls --format table --alsologtostderr:
I1019 16:41:30.523679   33000 out.go:360] Setting OutFile to fd 1 ...
I1019 16:41:30.523796   33000 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:30.523807   33000 out.go:374] Setting ErrFile to fd 2...
I1019 16:41:30.523813   33000 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:30.524067   33000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
I1019 16:41:30.524646   33000 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:30.524764   33000 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:30.525212   33000 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
I1019 16:41:30.543179   33000 ssh_runner.go:195] Run: systemctl --version
I1019 16:41:30.543244   33000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
I1019 16:41:30.561929   33000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
I1019 16:41:30.665062   33000 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-328874 image ls --format json --alsologtostderr:
[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b068eb6c811e7f1cb43e88fe631bea6eb7b5448d2da5f1c22c4b17b1e93826fa","repoDigests":["docker.io/library/09b19654685e8f50ff28e6afaafbc23d42c8cc2a7ee5e7e17f9ed2decb4c2d77-tmp@sha256:d976a6ac03cfd1a5ee288484fc023e55c919980320ed7009583019893e2e74f6"],"repoTags":[],"size":"1638179"},{"id":"1338a14ea7aab73a6d2b54edc3d8e462a17d88cd76924acd1a6d156246f1b62b","repoDigests":["localhost/my-image@sha256:e2e90a8708bb1355ad48efa5d4a652b59fbae7f035f4294bd3b7d90ee604be79"],"repoTags":["localhost/my-image:functional-328874"],"size":"1640791"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-328874 image ls --format json --alsologtostderr:
I1019 16:41:30.296829   32965 out.go:360] Setting OutFile to fd 1 ...
I1019 16:41:30.297014   32965 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:30.297027   32965 out.go:374] Setting ErrFile to fd 2...
I1019 16:41:30.297043   32965 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:30.297329   32965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
I1019 16:41:30.298110   32965 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:30.298297   32965 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:30.298837   32965 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
I1019 16:41:30.317604   32965 ssh_runner.go:195] Run: systemctl --version
I1019 16:41:30.317665   32965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
I1019 16:41:30.335863   32965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
I1019 16:41:30.441637   32965 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
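
The output above is a JSON array of image objects with id, repoDigests, repoTags, and size (bytes as a decimal string). A sketch of decoding it using exactly those fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the objects visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-328874",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}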

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-328874 image ls --format yaml --alsologtostderr:
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-328874 image ls --format yaml --alsologtostderr:
I1019 16:41:26.080717   32567 out.go:360] Setting OutFile to fd 1 ...
I1019 16:41:26.080931   32567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:26.080960   32567 out.go:374] Setting ErrFile to fd 2...
I1019 16:41:26.080979   32567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:26.081278   32567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
I1019 16:41:26.081932   32567 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:26.082107   32567 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:26.082894   32567 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
I1019 16:41:26.101793   32567 ssh_runner.go:195] Run: systemctl --version
I1019 16:41:26.101862   32567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
I1019 16:41:26.123288   32567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
I1019 16:41:26.229493   32567 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-328874 ssh pgrep buildkitd: exit status 1 (303.697779ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image build -t localhost/my-image:functional-328874 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-328874 image build -t localhost/my-image:functional-328874 testdata/build --alsologtostderr: (3.365085208s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-328874 image build -t localhost/my-image:functional-328874 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b068eb6c811
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-328874
--> 1338a14ea7a
Successfully tagged localhost/my-image:functional-328874
1338a14ea7aab73a6d2b54edc3d8e462a17d88cd76924acd1a6d156246f1b62b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-328874 image build -t localhost/my-image:functional-328874 testdata/build --alsologtostderr:
I1019 16:41:26.631481   32665 out.go:360] Setting OutFile to fd 1 ...
I1019 16:41:26.631645   32665 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:26.631655   32665 out.go:374] Setting ErrFile to fd 2...
I1019 16:41:26.631660   32665 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:41:26.631906   32665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
I1019 16:41:26.632538   32665 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:26.633404   32665 config.go:182] Loaded profile config "functional-328874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:41:26.634065   32665 cli_runner.go:164] Run: docker container inspect functional-328874 --format={{.State.Status}}
I1019 16:41:26.652269   32665 ssh_runner.go:195] Run: systemctl --version
I1019 16:41:26.652339   32665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-328874
I1019 16:41:26.670448   32665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/functional-328874/id_rsa Username:docker}
I1019 16:41:26.775168   32665 build_images.go:162] Building image from path: /tmp/build.612837474.tar
I1019 16:41:26.775235   32665 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1019 16:41:26.783529   32665 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.612837474.tar
I1019 16:41:26.787412   32665 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.612837474.tar: stat -c "%s %y" /var/lib/minikube/build/build.612837474.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.612837474.tar': No such file or directory
I1019 16:41:26.787443   32665 ssh_runner.go:362] scp /tmp/build.612837474.tar --> /var/lib/minikube/build/build.612837474.tar (3072 bytes)
I1019 16:41:26.806955   32665 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.612837474
I1019 16:41:26.815053   32665 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.612837474 -xf /var/lib/minikube/build/build.612837474.tar
I1019 16:41:26.823827   32665 crio.go:315] Building image: /var/lib/minikube/build/build.612837474
I1019 16:41:26.823914   32665 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-328874 /var/lib/minikube/build/build.612837474 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1019 16:41:29.913368   32665 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-328874 /var/lib/minikube/build/build.612837474 --cgroup-manager=cgroupfs: (3.08942418s)
I1019 16:41:29.913444   32665 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.612837474
I1019 16:41:29.921595   32665 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.612837474.tar
I1019 16:41:29.930367   32665 build_images.go:218] Built localhost/my-image:functional-328874 from /tmp/build.612837474.tar
I1019 16:41:29.930396   32665 build_images.go:134] succeeded building to: functional-328874
I1019 16:41:29.930401   32665 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
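
Because the pgrep probe finds no buildkitd in the guest, minikube falls back to tarring the build context, copying it to /var/lib/minikube/build, and running "sudo podman build" inside the node, as the Stderr trace shows. The three STEP lines imply a build context along these lines (reconstructed from the log; the actual testdata/build contents may differ):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /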

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.42s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.392836674s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-328874
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image rm kicbase/echo-server:functional-328874 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 update-context --alsologtostderr -v=2
E1019 16:43:52.674742    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:45:15.735279    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-328874 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-328874
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-328874
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-328874
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (203.36s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1019 16:48:52.667684    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m22.470790655s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.36s)

TestMultiControlPlane/serial/DeployApp (7.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 kubectl -- rollout status deployment/busybox: (4.251997192s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-7k2p2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-fjf2r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-wsplm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-7k2p2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-fjf2r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-wsplm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-7k2p2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-fjf2r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-wsplm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.23s)
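
Each busybox replica resolves kubernetes.io (an external name), kubernetes.default (expanded via the pod's DNS search path), and the full cluster FQDN, so both upstream and in-cluster DNS are exercised from every pod. The same fan-out as a sketch (pod names are from this run and change on every deployment):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{ // names from this run; regenerate via `kubectl get pods`
		"busybox-7b57f96db7-7k2p2",
		"busybox-7b57f96db7-fjf2r",
		"busybox-7b57f96db7-wsplm",
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-926360",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n", pod, name, err)
				continue
			}
			fmt.Printf("%s resolved %s:\n%s\n", pod, name, out)
		}
	}
}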

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-7k2p2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-7k2p2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-fjf2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-fjf2r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-wsplm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 kubectl -- exec busybox-7b57f96db7-wsplm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)
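
The shell pipeline above recovers the host IP from busybox's nslookup output: awk 'NR==5' selects the fifth line (where this nslookup build happens to print the answer) and cut takes its third space-separated field; the test then pings that address. A Go version of the same extraction, exactly as position-brittle as the pipeline it mirrors:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-926360",
		"exec", "busybox-7b57f96db7-7k2p2", "--",
		"nslookup", "host.minikube.internal").Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		panic("unexpected nslookup output")
	}
	// awk 'NR==5' | cut -d' ' -f3  ==> fifth line, third single-space field
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		panic("unexpected answer line")
	}
	fmt.Println("host IP:", fields[2]) // the address the test then pings
}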

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.68s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 node add --alsologtostderr -v 5
E1019 16:50:43.130035    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:43.136425    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:43.147826    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:43.169254    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:43.210610    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:43.292036    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:43.453298    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:43.775092    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:44.416888    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:45.698146    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:50:48.260145    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 node add --alsologtostderr -v 5: (59.592988434s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5: (1.081901244s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.68s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-926360 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.112114896s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (20.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 status --output json --alsologtostderr -v 5: (1.07606679s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp testdata/cp-test.txt ha-926360:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3229522375/001/cp-test_ha-926360.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360:/home/docker/cp-test.txt ha-926360-m02:/home/docker/cp-test_ha-926360_ha-926360-m02.txt
E1019 16:50:53.382285    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test_ha-926360_ha-926360-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360:/home/docker/cp-test.txt ha-926360-m03:/home/docker/cp-test_ha-926360_ha-926360-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test_ha-926360_ha-926360-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360:/home/docker/cp-test.txt ha-926360-m04:/home/docker/cp-test_ha-926360_ha-926360-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test_ha-926360_ha-926360-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp testdata/cp-test.txt ha-926360-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3229522375/001/cp-test_ha-926360-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m02:/home/docker/cp-test.txt ha-926360:/home/docker/cp-test_ha-926360-m02_ha-926360.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test_ha-926360-m02_ha-926360.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m02:/home/docker/cp-test.txt ha-926360-m03:/home/docker/cp-test_ha-926360-m02_ha-926360-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test_ha-926360-m02_ha-926360-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m02:/home/docker/cp-test.txt ha-926360-m04:/home/docker/cp-test_ha-926360-m02_ha-926360-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test_ha-926360-m02_ha-926360-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp testdata/cp-test.txt ha-926360-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3229522375/001/cp-test_ha-926360-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m03:/home/docker/cp-test.txt ha-926360:/home/docker/cp-test_ha-926360-m03_ha-926360.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test.txt"
E1019 16:51:03.624957    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test_ha-926360-m03_ha-926360.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m03:/home/docker/cp-test.txt ha-926360-m02:/home/docker/cp-test_ha-926360-m03_ha-926360-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test_ha-926360-m03_ha-926360-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m03:/home/docker/cp-test.txt ha-926360-m04:/home/docker/cp-test_ha-926360-m03_ha-926360-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test_ha-926360-m03_ha-926360-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp testdata/cp-test.txt ha-926360-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3229522375/001/cp-test_ha-926360-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m04:/home/docker/cp-test.txt ha-926360:/home/docker/cp-test_ha-926360-m04_ha-926360.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360 "sudo cat /home/docker/cp-test_ha-926360-m04_ha-926360.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m04:/home/docker/cp-test.txt ha-926360-m02:/home/docker/cp-test_ha-926360-m04_ha-926360-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m02 "sudo cat /home/docker/cp-test_ha-926360-m04_ha-926360-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 cp ha-926360-m04:/home/docker/cp-test.txt ha-926360-m03:/home/docker/cp-test_ha-926360-m04_ha-926360-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 ssh -n ha-926360-m03 "sudo cat /home/docker/cp-test_ha-926360-m04_ha-926360-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.30s)
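
Every copy in the block above is verified the same way: cp into the target, then ssh -n <node> "sudo cat <path>" to read it back. A minimal sketch of one such round trip outside the test harness, assuming it runs from the repo root used in this report so the out/minikube-linux-arm64 binary and testdata/cp-test.txt exist; the profile and node names are reused from this run purely for illustration:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile, node := "ha-926360", "ha-926360-m02"

	// Push the file into the node, as `minikube cp` does above.
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp: %v\n%s", err, out)
	}

	// Read it back over ssh, mirroring the `sudo cat` verification step.
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("round trip mismatch: got %q", got)
	}
	log.Println("cp round trip verified")
}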

TestMultiControlPlane/serial/StopSecondaryNode (12.85s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 node stop m02 --alsologtostderr -v 5: (12.067553818s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5: exit status 7 (777.613881ms)
-- stdout --
	ha-926360
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-926360-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926360-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-926360-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1019 16:51:23.126115   48127 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:51:23.126350   48127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:51:23.126383   48127 out.go:374] Setting ErrFile to fd 2...
	I1019 16:51:23.126401   48127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:51:23.126807   48127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:51:23.127047   48127 out.go:368] Setting JSON to false
	I1019 16:51:23.127135   48127 mustload.go:66] Loading cluster: ha-926360
	I1019 16:51:23.127208   48127 notify.go:221] Checking for updates...
	I1019 16:51:23.127648   48127 config.go:182] Loaded profile config "ha-926360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:51:23.127689   48127 status.go:174] checking status of ha-926360 ...
	I1019 16:51:23.128261   48127 cli_runner.go:164] Run: docker container inspect ha-926360 --format={{.State.Status}}
	I1019 16:51:23.150356   48127 status.go:371] ha-926360 host status = "Running" (err=<nil>)
	I1019 16:51:23.150379   48127 host.go:66] Checking if "ha-926360" exists ...
	I1019 16:51:23.150772   48127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926360
	I1019 16:51:23.181694   48127 host.go:66] Checking if "ha-926360" exists ...
	I1019 16:51:23.182069   48127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:51:23.182138   48127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926360
	I1019 16:51:23.203334   48127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/ha-926360/id_rsa Username:docker}
	I1019 16:51:23.304417   48127 ssh_runner.go:195] Run: systemctl --version
	I1019 16:51:23.310754   48127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:51:23.325934   48127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:51:23.391804   48127 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-19 16:51:23.378125914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 16:51:23.392398   48127 kubeconfig.go:125] found "ha-926360" server: "https://192.168.49.254:8443"
	I1019 16:51:23.392433   48127 api_server.go:166] Checking apiserver status ...
	I1019 16:51:23.392482   48127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:51:23.404969   48127 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup
	I1019 16:51:23.413681   48127 api_server.go:182] apiserver freezer: "12:freezer:/docker/e06ddd5344166377d9da52f22943a8ee33ed8d074b4ec7c043d88a98707adcaf/crio/crio-b972a91574e3b981165bc208cbd816ec247157991040d5c2d0716180ce0aa241"
	I1019 16:51:23.413757   48127 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e06ddd5344166377d9da52f22943a8ee33ed8d074b4ec7c043d88a98707adcaf/crio/crio-b972a91574e3b981165bc208cbd816ec247157991040d5c2d0716180ce0aa241/freezer.state
	I1019 16:51:23.421738   48127 api_server.go:204] freezer state: "THAWED"
	I1019 16:51:23.421761   48127 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 16:51:23.429919   48127 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 16:51:23.429947   48127 status.go:463] ha-926360 apiserver status = Running (err=<nil>)
	I1019 16:51:23.429958   48127 status.go:176] ha-926360 status: &{Name:ha-926360 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:51:23.429975   48127 status.go:174] checking status of ha-926360-m02 ...
	I1019 16:51:23.430285   48127 cli_runner.go:164] Run: docker container inspect ha-926360-m02 --format={{.State.Status}}
	I1019 16:51:23.448189   48127 status.go:371] ha-926360-m02 host status = "Stopped" (err=<nil>)
	I1019 16:51:23.448230   48127 status.go:384] host is not running, skipping remaining checks
	I1019 16:51:23.448238   48127 status.go:176] ha-926360-m02 status: &{Name:ha-926360-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:51:23.448263   48127 status.go:174] checking status of ha-926360-m03 ...
	I1019 16:51:23.448680   48127 cli_runner.go:164] Run: docker container inspect ha-926360-m03 --format={{.State.Status}}
	I1019 16:51:23.467395   48127 status.go:371] ha-926360-m03 host status = "Running" (err=<nil>)
	I1019 16:51:23.467419   48127 host.go:66] Checking if "ha-926360-m03" exists ...
	I1019 16:51:23.467735   48127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926360-m03
	I1019 16:51:23.487315   48127 host.go:66] Checking if "ha-926360-m03" exists ...
	I1019 16:51:23.487649   48127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:51:23.487699   48127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926360-m03
	I1019 16:51:23.504921   48127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/ha-926360-m03/id_rsa Username:docker}
	I1019 16:51:23.603980   48127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:51:23.617497   48127 kubeconfig.go:125] found "ha-926360" server: "https://192.168.49.254:8443"
	I1019 16:51:23.617526   48127 api_server.go:166] Checking apiserver status ...
	I1019 16:51:23.617573   48127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:51:23.628642   48127 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	I1019 16:51:23.637026   48127 api_server.go:182] apiserver freezer: "12:freezer:/docker/7cee6ef6e948e9cd5dacfe9e1d01a79b86a7b18aeef9cc872019f8224da7ca50/crio/crio-7726c67477a6ec253e8333d0aee040249b71e545b4e6c39c70fd9e1b480d6fc6"
	I1019 16:51:23.637098   48127 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7cee6ef6e948e9cd5dacfe9e1d01a79b86a7b18aeef9cc872019f8224da7ca50/crio/crio-7726c67477a6ec253e8333d0aee040249b71e545b4e6c39c70fd9e1b480d6fc6/freezer.state
	I1019 16:51:23.644534   48127 api_server.go:204] freezer state: "THAWED"
	I1019 16:51:23.644567   48127 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 16:51:23.653159   48127 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 16:51:23.653189   48127 status.go:463] ha-926360-m03 apiserver status = Running (err=<nil>)
	I1019 16:51:23.653199   48127 status.go:176] ha-926360-m03 status: &{Name:ha-926360-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:51:23.653239   48127 status.go:174] checking status of ha-926360-m04 ...
	I1019 16:51:23.653550   48127 cli_runner.go:164] Run: docker container inspect ha-926360-m04 --format={{.State.Status}}
	I1019 16:51:23.677504   48127 status.go:371] ha-926360-m04 host status = "Running" (err=<nil>)
	I1019 16:51:23.677539   48127 host.go:66] Checking if "ha-926360-m04" exists ...
	I1019 16:51:23.677858   48127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926360-m04
	I1019 16:51:23.696187   48127 host.go:66] Checking if "ha-926360-m04" exists ...
	I1019 16:51:23.696486   48127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:51:23.696528   48127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926360-m04
	I1019 16:51:23.715974   48127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/ha-926360-m04/id_rsa Username:docker}
	I1019 16:51:23.820357   48127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:51:23.833759   48127 status.go:176] ha-926360-m04 status: &{Name:ha-926360-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.85s)
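
Note the status semantics captured above: with m02 stopped, minikube status still prints per-node state on stdout but exits non-zero (7 in this run), so scripted callers have to branch on the exit code rather than treating any failure exit as a hard error. A minimal Go sketch of that handling, under the same binary-path and profile-name assumptions as before:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Output() still returns the collected stdout when the command exits
	// non-zero, which is exactly what we need here.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-926360",
		"status").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all nodes up:\n%s", out)
	case errors.As(err, &ee):
		fmt.Printf("cluster degraded (exit %d):\n%s", ee.ExitCode(), out)
	default:
		log.Fatal(err) // binary missing, etc.
	}
}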

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1019 16:51:24.106680    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.29s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 node start m02 --alsologtostderr -v 5: (33.960718737s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5: (1.194241628s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.791567473s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.79s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.18s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 stop --alsologtostderr -v 5
E1019 16:52:05.068581    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 stop --alsologtostderr -v 5: (27.64883238s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 start --wait true --alsologtostderr -v 5
E1019 16:53:26.990720    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:53:52.668459    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 start --wait true --alsologtostderr -v 5: (1m40.335469006s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.18s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.97s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 node delete m03 --alsologtostderr -v 5: (10.998223259s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.97s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.11s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 stop --alsologtostderr -v 5: (35.994423124s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5: exit status 7 (118.146778ms)
-- stdout --
	ha-926360
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926360-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926360-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1019 16:54:58.775797   60086 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:54:58.775991   60086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:54:58.776019   60086 out.go:374] Setting ErrFile to fd 2...
	I1019 16:54:58.776037   60086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:54:58.776321   60086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 16:54:58.776558   60086 out.go:368] Setting JSON to false
	I1019 16:54:58.776619   60086 mustload.go:66] Loading cluster: ha-926360
	I1019 16:54:58.776694   60086 notify.go:221] Checking for updates...
	I1019 16:54:58.777095   60086 config.go:182] Loaded profile config "ha-926360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:54:58.777130   60086 status.go:174] checking status of ha-926360 ...
	I1019 16:54:58.777983   60086 cli_runner.go:164] Run: docker container inspect ha-926360 --format={{.State.Status}}
	I1019 16:54:58.798526   60086 status.go:371] ha-926360 host status = "Stopped" (err=<nil>)
	I1019 16:54:58.798639   60086 status.go:384] host is not running, skipping remaining checks
	I1019 16:54:58.798645   60086 status.go:176] ha-926360 status: &{Name:ha-926360 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:54:58.798681   60086 status.go:174] checking status of ha-926360-m02 ...
	I1019 16:54:58.798990   60086 cli_runner.go:164] Run: docker container inspect ha-926360-m02 --format={{.State.Status}}
	I1019 16:54:58.824131   60086 status.go:371] ha-926360-m02 host status = "Stopped" (err=<nil>)
	I1019 16:54:58.824151   60086 status.go:384] host is not running, skipping remaining checks
	I1019 16:54:58.824157   60086 status.go:176] ha-926360-m02 status: &{Name:ha-926360-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:54:58.824175   60086 status.go:174] checking status of ha-926360-m04 ...
	I1019 16:54:58.824472   60086 cli_runner.go:164] Run: docker container inspect ha-926360-m04 --format={{.State.Status}}
	I1019 16:54:58.840908   60086 status.go:371] ha-926360-m04 host status = "Stopped" (err=<nil>)
	I1019 16:54:58.840939   60086 status.go:384] host is not running, skipping remaining checks
	I1019 16:54:58.840947   60086 status.go:176] ha-926360-m04 status: &{Name:ha-926360-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.11s)

TestMultiControlPlane/serial/RestartCluster (67.23s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1019 16:55:43.129849    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m6.250910359s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.23s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

TestMultiControlPlane/serial/AddSecondaryNode (80.03s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 node add --control-plane --alsologtostderr -v 5
E1019 16:56:10.836285    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 node add --control-plane --alsologtostderr -v 5: (1m18.967171663s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-926360 status --alsologtostderr -v 5: (1.058478762s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.03s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.090090112s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestJSONOutput/start/Command (79.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-280655 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1019 16:58:52.667699    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-280655 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.959024102s)
--- PASS: TestJSONOutput/start/Command (79.96s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-280655 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-280655 --output=json --user=testUser: (5.760198401s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-093833 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-093833 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.831582ms)
-- stdout --
	{"specversion":"1.0","id":"2bc76e53-8451-4d51-88a1-ec69749cb868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-093833] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd0dd3a7-b007-48f4-86f1-e28ec8abeec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"17a2158e-dce9-404b-afde-c2433e350ed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"92af7530-654c-4f54-9d5f-bb64d99729dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig"}}
	{"specversion":"1.0","id":"5cb2467d-17cd-4d2e-a8e9-9f605ab61696","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube"}}
	{"specversion":"1.0","id":"bec19338-6050-451f-b416-c5a22fc7e29d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5a0ec807-49e4-4a53-8c43-07478bcbc0f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"65b37e5a-1aec-486a-bca2-36bed937a8e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-093833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-093833
--- PASS: TestErrorJSONOutput (0.23s)
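
Each stdout line captured above is a self-contained CloudEvents-style JSON object, which is what makes --output=json scriptable: a consumer decodes one line at a time and dispatches on the type field. A minimal Go reader for such a stream; the field names mirror the specversion/type/data keys visible in the capture, but this is an illustrative consumer, not the project's own parser:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the per-line JSON objects shown in the capture above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}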

TestKicCustomNetwork/create_custom_network (39.04s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-682197 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-682197 --network=: (36.799683974s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-682197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-682197
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-682197: (2.211736974s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.04s)

TestKicCustomNetwork/use_default_bridge_network (42.84s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-507767 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-507767 --network=bridge: (40.730782876s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-507767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-507767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-507767: (2.086883108s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (42.84s)

TestKicExistingNetwork (35.27s)

=== RUN   TestKicExistingNetwork
I1019 17:00:34.232852    4111 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1019 17:00:34.253484    4111 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1019 17:00:34.253567    4111 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1019 17:00:34.253585    4111 cli_runner.go:164] Run: docker network inspect existing-network
W1019 17:00:34.269026    4111 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1019 17:00:34.269058    4111 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1019 17:00:34.269073    4111 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1019 17:00:34.269194    4111 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1019 17:00:34.285086    4111 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c01d2b730f71 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:5f:2a:dd:26:47} reservation:<nil>}
I1019 17:00:34.285389    4111 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016b3730}
I1019 17:00:34.285413    4111 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1019 17:00:34.285471    4111 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1019 17:00:34.343575    4111 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-598790 --network=existing-network
E1019 17:00:43.130142    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-598790 --network=existing-network: (32.986438104s)
helpers_test.go:175: Cleaning up "existing-network-598790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-598790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-598790: (2.13731586s)
I1019 17:01:09.483551    4111 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.27s)
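
The trace above shows the setup this test depends on: probe for existing-network, find 192.168.49.0/24 already taken by the default minikube bridge, create the network on the next free /24, and only then start a profile against it with --network. A minimal sketch of reproducing that arrangement by hand; the names and the 192.168.58.0/24 subnet mirror this run but are otherwise arbitrary, and the minikube path assumes the repo root used here:

package main

import (
	"log"
	"os/exec"
)

// run executes one CLI call and aborts on failure, echoing its output.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Pre-create a bridge network on a free subnet, then hand minikube
	// its name; minikube adopts it instead of allocating a new one.
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"existing-network")
	run("out/minikube-linux-arm64", "start", "-p", "existing-network-598790",
		"--network=existing-network")
}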

TestKicCustomSubnet (31.95s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-768563 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-768563 --subnet=192.168.60.0/24: (29.67166768s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-768563 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-768563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-768563
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-768563: (2.251902288s)
--- PASS: TestKicCustomSubnet (31.95s)

TestKicStaticIP (36.37s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-415422 --static-ip=192.168.200.200
E1019 17:01:55.736604    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-415422 --static-ip=192.168.200.200: (34.062897851s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-415422 ip
helpers_test.go:175: Cleaning up "static-ip-415422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-415422
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-415422: (2.143952399s)
--- PASS: TestKicStaticIP (36.37s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (76.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-695094 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-695094 --driver=docker  --container-runtime=crio: (36.831142125s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-697777 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-697777 --driver=docker  --container-runtime=crio: (34.199258715s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-695094
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-697777
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-697777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-697777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-697777: (2.132280121s)
helpers_test.go:175: Cleaning up "first-695094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-695094
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-695094: (2.026675485s)
--- PASS: TestMinikubeProfile (76.66s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-539321 --memory=3072 --mount-string /tmp/TestMountStartserial544820334/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-539321 --memory=3072 --mount-string /tmp/TestMountStartserial544820334/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.11853831s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.12s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-539321 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-541282 --memory=3072 --mount-string /tmp/TestMountStartserial544820334/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-541282 --memory=3072 --mount-string /tmp/TestMountStartserial544820334/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.447207462s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-541282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-539321 --alsologtostderr -v=5
E1019 17:03:52.667299    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-539321 --alsologtostderr -v=5: (1.7298575s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-541282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-541282
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-541282: (1.277644469s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-541282
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-541282: (7.141693306s)
--- PASS: TestMountStart/serial/RestartStopped (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-541282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-945039 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1019 17:05:43.130495    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-945039 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m53.375978547s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.91s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.74s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-945039 -- rollout status deployment/busybox: (3.008831074s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-d7spd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-jv2dk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-d7spd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-jv2dk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-d7spd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-jv2dk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.74s)
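
The pattern here is: collect pod names with a jsonpath query, then exec nslookup in each pod against an external name, the service short name, and the fully qualified cluster name. A condensed sketch, under the assumption that kubectl's current context points at a cluster with the busybox Deployment already rolled out:

// dnscheck.go: a sketch of the DNS verification loop above, assuming a
// kubectl context with the busybox pods running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Collect pod names with the same jsonpath the test uses.
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Each pod must resolve external and in-cluster names.
		for _, host := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			res, err := exec.Command("kubectl", "exec", pod, "--",
				"nslookup", host).CombinedOutput()
			fmt.Printf("%s -> %s: err=%v\n%s", pod, host, err, res)
		}
	}
}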

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-d7spd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-d7spd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-jv2dk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-945039 -- exec busybox-7b57f96db7-jv2dk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (56.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-945039 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-945039 -v=5 --alsologtostderr: (56.120186341s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.82s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-945039 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp testdata/cp-test.txt multinode-945039:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile559019921/001/cp-test_multinode-945039.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039:/home/docker/cp-test.txt multinode-945039-m02:/home/docker/cp-test_multinode-945039_multinode-945039-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m02 "sudo cat /home/docker/cp-test_multinode-945039_multinode-945039-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039:/home/docker/cp-test.txt multinode-945039-m03:/home/docker/cp-test_multinode-945039_multinode-945039-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m03 "sudo cat /home/docker/cp-test_multinode-945039_multinode-945039-m03.txt"
E1019 17:07:06.198336    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp testdata/cp-test.txt multinode-945039-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile559019921/001/cp-test_multinode-945039-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039-m02:/home/docker/cp-test.txt multinode-945039:/home/docker/cp-test_multinode-945039-m02_multinode-945039.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039 "sudo cat /home/docker/cp-test_multinode-945039-m02_multinode-945039.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039-m02:/home/docker/cp-test.txt multinode-945039-m03:/home/docker/cp-test_multinode-945039-m02_multinode-945039-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m03 "sudo cat /home/docker/cp-test_multinode-945039-m02_multinode-945039-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp testdata/cp-test.txt multinode-945039-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile559019921/001/cp-test_multinode-945039-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039-m03:/home/docker/cp-test.txt multinode-945039:/home/docker/cp-test_multinode-945039-m03_multinode-945039.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039 "sudo cat /home/docker/cp-test_multinode-945039-m03_multinode-945039.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 cp multinode-945039-m03:/home/docker/cp-test.txt multinode-945039-m02:/home/docker/cp-test_multinode-945039-m03_multinode-945039-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 ssh -n multinode-945039-m02 "sudo cat /home/docker/cp-test_multinode-945039-m03_multinode-945039-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
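
The copy test is a matrix: a fixture is copied into every node, back to the host, and between nodes, and each transfer is verified by cat-ing the file over SSH. A condensed sketch of the per-node loop, assuming a running profile named multinode-945039 and the binary path used throughout this report:

// cpmatrix.go: a condensed sketch of the copy/verify loop above, assuming a
// running profile "multinode-945039" with the three nodes listed in the log.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "multinode-945039"}, args...)...).CombinedOutput()
}

func main() {
	nodes := []string{"multinode-945039", "multinode-945039-m02", "multinode-945039-m03"}
	for _, n := range nodes {
		// Copy the fixture into the node, then read it back over SSH to
		// confirm the transfer, mirroring the helpers_test.go steps above.
		if out, err := mk("cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt"); err != nil {
			fmt.Printf("cp to %s failed: %v\n%s", n, err, out)
			continue
		}
		out, err := mk("ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		fmt.Printf("%s: err=%v content=%q\n", n, err, out)
	}
}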

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-945039 node stop m03: (1.32365382s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-945039 status: exit status 7 (507.245881ms)

                                                
                                                
-- stdout --
	multinode-945039
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945039-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945039-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr: exit status 7 (564.267625ms)

                                                
                                                
-- stdout --
	multinode-945039
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945039-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945039-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:07:14.688676  110429 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:07:14.688801  110429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:07:14.688812  110429 out.go:374] Setting ErrFile to fd 2...
	I1019 17:07:14.688817  110429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:07:14.689071  110429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:07:14.689255  110429 out.go:368] Setting JSON to false
	I1019 17:07:14.689297  110429 mustload.go:66] Loading cluster: multinode-945039
	I1019 17:07:14.689356  110429 notify.go:221] Checking for updates...
	I1019 17:07:14.691717  110429 config.go:182] Loaded profile config "multinode-945039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:07:14.691762  110429 status.go:174] checking status of multinode-945039 ...
	I1019 17:07:14.692583  110429 cli_runner.go:164] Run: docker container inspect multinode-945039 --format={{.State.Status}}
	I1019 17:07:14.714297  110429 status.go:371] multinode-945039 host status = "Running" (err=<nil>)
	I1019 17:07:14.714322  110429 host.go:66] Checking if "multinode-945039" exists ...
	I1019 17:07:14.714729  110429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-945039
	I1019 17:07:14.732049  110429 host.go:66] Checking if "multinode-945039" exists ...
	I1019 17:07:14.732323  110429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:07:14.732373  110429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-945039
	I1019 17:07:14.750940  110429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/multinode-945039/id_rsa Username:docker}
	I1019 17:07:14.852295  110429 ssh_runner.go:195] Run: systemctl --version
	I1019 17:07:14.858932  110429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:07:14.871606  110429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:07:14.927180  110429 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-19 17:07:14.917175141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:07:14.927726  110429 kubeconfig.go:125] found "multinode-945039" server: "https://192.168.67.2:8443"
	I1019 17:07:14.927755  110429 api_server.go:166] Checking apiserver status ...
	I1019 17:07:14.927796  110429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:07:14.944518  110429 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup
	I1019 17:07:14.952889  110429 api_server.go:182] apiserver freezer: "12:freezer:/docker/b1178eaf9d668f10c20560735866a1287a13787588a7a5c3d1c9cb0c34345e8e/crio/crio-9ab02ef6fa888ab76cd4e531e8719dd0619de3338577d23bf602376b342c2ce5"
	I1019 17:07:14.952969  110429 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b1178eaf9d668f10c20560735866a1287a13787588a7a5c3d1c9cb0c34345e8e/crio/crio-9ab02ef6fa888ab76cd4e531e8719dd0619de3338577d23bf602376b342c2ce5/freezer.state
	I1019 17:07:14.960344  110429 api_server.go:204] freezer state: "THAWED"
	I1019 17:07:14.960400  110429 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1019 17:07:14.968456  110429 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1019 17:07:14.968482  110429 status.go:463] multinode-945039 apiserver status = Running (err=<nil>)
	I1019 17:07:14.968492  110429 status.go:176] multinode-945039 status: &{Name:multinode-945039 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 17:07:14.968510  110429 status.go:174] checking status of multinode-945039-m02 ...
	I1019 17:07:14.968810  110429 cli_runner.go:164] Run: docker container inspect multinode-945039-m02 --format={{.State.Status}}
	I1019 17:07:14.985886  110429 status.go:371] multinode-945039-m02 host status = "Running" (err=<nil>)
	I1019 17:07:14.985911  110429 host.go:66] Checking if "multinode-945039-m02" exists ...
	I1019 17:07:14.986242  110429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-945039-m02
	I1019 17:07:15.020065  110429 host.go:66] Checking if "multinode-945039-m02" exists ...
	I1019 17:07:15.020390  110429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:07:15.020431  110429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-945039-m02
	I1019 17:07:15.056264  110429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21683-2307/.minikube/machines/multinode-945039-m02/id_rsa Username:docker}
	I1019 17:07:15.168307  110429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:07:15.181745  110429 status.go:176] multinode-945039-m02 status: &{Name:multinode-945039-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 17:07:15.181786  110429 status.go:174] checking status of multinode-945039-m03 ...
	I1019 17:07:15.182131  110429 cli_runner.go:164] Run: docker container inspect multinode-945039-m03 --format={{.State.Status}}
	I1019 17:07:15.200546  110429 status.go:371] multinode-945039-m03 host status = "Stopped" (err=<nil>)
	I1019 17:07:15.200567  110429 status.go:384] host is not running, skipping remaining checks
	I1019 17:07:15.200573  110429 status.go:176] multinode-945039-m03 status: &{Name:multinode-945039-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
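
The stderr trace shows how status decides the control plane is healthy: locate the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz and expect 200 with body "ok". A sketch of just that final probe, using the endpoint from the log; TLS verification is skipped because the probe only tests liveness, not identity:

// healthz.go: a sketch of the last step of the status check above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok",
	// matching the "returned 200: ok" line in the trace.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}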

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.97s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-945039 node start m03 -v=5 --alsologtostderr: (7.156195862s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-945039
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-945039
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-945039: (25.017258727s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-945039 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-945039 --wait=true -v=5 --alsologtostderr: (48.166937598s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-945039
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.32s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-945039 node delete m03: (4.951798548s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 stop
E1019 17:08:52.675307    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-945039 stop: (23.78785184s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-945039 status: exit status 7 (101.781402ms)

                                                
                                                
-- stdout --
	multinode-945039
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-945039-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr: exit status 7 (94.503379ms)

                                                
                                                
-- stdout --
	multinode-945039
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-945039-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:09:06.107914  118204 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:09:06.108323  118204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:09:06.108339  118204 out.go:374] Setting ErrFile to fd 2...
	I1019 17:09:06.108344  118204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:09:06.108654  118204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:09:06.108881  118204 out.go:368] Setting JSON to false
	I1019 17:09:06.108914  118204 mustload.go:66] Loading cluster: multinode-945039
	I1019 17:09:06.109045  118204 notify.go:221] Checking for updates...
	I1019 17:09:06.109360  118204 config.go:182] Loaded profile config "multinode-945039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:09:06.109379  118204 status.go:174] checking status of multinode-945039 ...
	I1019 17:09:06.110275  118204 cli_runner.go:164] Run: docker container inspect multinode-945039 --format={{.State.Status}}
	I1019 17:09:06.129211  118204 status.go:371] multinode-945039 host status = "Stopped" (err=<nil>)
	I1019 17:09:06.129236  118204 status.go:384] host is not running, skipping remaining checks
	I1019 17:09:06.129243  118204 status.go:176] multinode-945039 status: &{Name:multinode-945039 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 17:09:06.129282  118204 status.go:174] checking status of multinode-945039-m02 ...
	I1019 17:09:06.129590  118204 cli_runner.go:164] Run: docker container inspect multinode-945039-m02 --format={{.State.Status}}
	I1019 17:09:06.150776  118204 status.go:371] multinode-945039-m02 host status = "Stopped" (err=<nil>)
	I1019 17:09:06.150800  118204 status.go:384] host is not running, skipping remaining checks
	I1019 17:09:06.150806  118204 status.go:176] multinode-945039-m02 status: &{Name:multinode-945039-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.18s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-945039 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-945039 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (56.44057735s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-945039 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.18s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-945039
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-945039-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-945039-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.686376ms)

                                                
                                                
-- stdout --
	* [multinode-945039-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-945039-m02' is duplicated with machine name 'multinode-945039-m02' in profile 'multinode-945039'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-945039-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-945039-m03 --driver=docker  --container-runtime=crio: (33.749083473s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-945039
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-945039: exit status 80 (330.179869ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-945039 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-945039-m03 already exists in multinode-945039-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-945039-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-945039-m03: (2.039412942s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.27s)
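
Both failures above come from the same rule: a new profile name may not collide with a machine name owned by an existing profile. An illustrative sketch of that check (not minikube's actual implementation), seeded with the machines that exist at this point in the run:

// namecheck.go: an illustrative sketch of the uniqueness rule the test
// exercises; the data below mirrors the cluster state in the log.
package main

import "fmt"

// existing maps profile name -> machine names. At this point the profile
// "multinode-945039" owns two machines (m03 was deleted earlier).
var existing = map[string][]string{
	"multinode-945039": {"multinode-945039", "multinode-945039-m02"},
}

func validateName(name string) error {
	for profile, machines := range existing {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validateName("multinode-945039-m02")) // rejected, as in the log
	fmt.Println(validateName("multinode-945039-m03")) // <nil>: free, so the start above succeeds
}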

                                                
                                    
TestPreload (127.05s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-924181 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-924181 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (58.809941232s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-924181 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-924181 image pull gcr.io/k8s-minikube/busybox: (2.249915917s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-924181
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-924181: (6.141109089s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-924181 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-924181 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (57.141093673s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-924181 image list
helpers_test.go:175: Cleaning up "test-preload-924181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-924181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-924181: (2.487939747s)
--- PASS: TestPreload (127.05s)

                                                
                                    
TestScheduledStopUnix (110.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-577475 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-577475 --memory=3072 --driver=docker  --container-runtime=crio: (34.315261729s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-577475 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-577475 -n scheduled-stop-577475
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-577475 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1019 17:13:25.813674    4111 retry.go:31] will retry after 141.586µs: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.814880    4111 retry.go:31] will retry after 187.492µs: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.815954    4111 retry.go:31] will retry after 269.051µs: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.817067    4111 retry.go:31] will retry after 462.797µs: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.818187    4111 retry.go:31] will retry after 379.059µs: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.819283    4111 retry.go:31] will retry after 994.608µs: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.820402    4111 retry.go:31] will retry after 1.401948ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.822730    4111 retry.go:31] will retry after 1.484566ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.824933    4111 retry.go:31] will retry after 2.367122ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.828125    4111 retry.go:31] will retry after 3.346034ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.832347    4111 retry.go:31] will retry after 4.372791ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.837560    4111 retry.go:31] will retry after 11.311125ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.851822    4111 retry.go:31] will retry after 15.453703ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.868667    4111 retry.go:31] will retry after 15.771784ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
I1019 17:13:25.885304    4111 retry.go:31] will retry after 40.788513ms: open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/scheduled-stop-577475/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-577475 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-577475 -n scheduled-stop-577475
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-577475
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-577475 --schedule 15s
E1019 17:13:52.668332    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-577475
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-577475: exit status 7 (70.633113ms)

                                                
                                                
-- stdout --
	scheduled-stop-577475
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-577475 -n scheduled-stop-577475
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-577475 -n scheduled-stop-577475: exit status 7 (65.908639ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-577475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-577475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-577475: (5.022234445s)
--- PASS: TestScheduledStopUnix (110.94s)
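
The retry.go lines show the harness polling for the scheduled-stop pid file with roughly exponential, jittered delays (from ~141µs up to ~40ms). A sketch of that wait loop, with an illustrative file path:

// waitpid.go: a sketch of the retry pattern visible in the trace above:
// poll for a pid file with exponentially growing, jittered delays.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Jittered exponential backoff, mirroring the
		// "will retry after ..." lines from retry.go.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
}

func main() {
	err := waitForFile("/tmp/scheduled-stop-demo/pid", 15)
	fmt.Println("result:", err)
}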

                                                
                                    
TestInsufficientStorage (13.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-261788 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-261788 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.174631079s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1c12fd02-660b-4a26-ad3c-6aa40f422a76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-261788] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e08771e9-6e06-480b-8c36-8171ad616e62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"e4226ab0-d774-4ef6-ba3f-6029f79a173d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4a111cf7-70a6-404b-93a7-bc5973ae6a87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig"}}
	{"specversion":"1.0","id":"78429148-c938-479c-bd70-a011d8d0b878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube"}}
	{"specversion":"1.0","id":"c3e59bf6-96d6-445b-8839-385393a6a6d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6e019f77-c6d5-4ba5-ba1b-0f039d89d7ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"46dcc957-8788-4ccd-b9b8-b1343193352e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c7e84c7e-8334-49fb-a52b-fae9d82cc6e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a31b1900-b8ad-4493-8c05-12c86476972f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c793d5a4-4ee4-4743-850d-bea865f0d503","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"986cc4e9-8cba-423b-a3ff-6f5d6c706817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-261788\" primary control-plane node in \"insufficient-storage-261788\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ea728fa-562f-4ecc-b65d-1ec0eb3336ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ebf16dc-eea9-4d3b-bcc9-ac3911125a66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbfcf469-e851-4b03-a872-e1388457f67c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-261788 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-261788 --output=json --layout=cluster: exit status 7 (298.623467ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-261788","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-261788","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 17:14:53.376932  134394 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-261788" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-261788 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-261788 --output=json --layout=cluster: exit status 7 (310.840755ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-261788","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-261788","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1019 17:14:53.688658  134460 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-261788" does not appear in /home/jenkins/minikube-integration/21683-2307/kubeconfig
	E1019 17:14:53.698712  134460 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/insufficient-storage-261788/events.json: no such file or directory
** /stderr **
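Both status invocations above exit with status 7 yet still print a complete cluster document on stdout. A minimal sketch of consuming that --output=json --layout=cluster payload follows; the struct shape is inferred from the JSON shown in this log, not taken from minikube's source, and a `minikube` binary on PATH is assumed in place of the test's out/minikube-linux-arm64.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	StatusDetail  string               `json:"StatusDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	// status exits non-zero (7 above) for a degraded cluster, so the exit
	// error is deliberately ignored and whatever stdout arrived is parsed.
	out, _ := exec.Command("minikube", "status", "-p", "insufficient-storage-261788",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d) - %s\n", st.Name, st.StatusName, st.StatusCode, st.StatusDetail)
}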
helpers_test.go:175: Cleaning up "insufficient-storage-261788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-261788
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-261788: (1.97749603s)
--- PASS: TestInsufficientStorage (13.76s)

TestRunningBinaryUpgrade (57.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.862444744 start -p running-upgrade-285806 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.862444744 start -p running-upgrade-285806 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.960661323s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-285806 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-285806 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.731118819s)
helpers_test.go:175: Cleaning up "running-upgrade-285806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-285806
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-285806: (2.271610813s)
--- PASS: TestRunningBinaryUpgrade (57.79s)

TestKubernetesUpgrade (350.58s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1019 17:18:35.738877    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:18:52.674661    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.027145374s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-921840
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-921840: (1.374084953s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-921840 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-921840 status --format={{.Host}}: exit status 7 (69.563377ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1019 17:20:43.130602    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.72386748s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-921840 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (130.663805ms)
-- stdout --
	* [kubernetes-upgrade-921840] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-921840
	    minikube start -p kubernetes-upgrade-921840 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9218402 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-921840 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1019 17:23:46.200127    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-921840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.995340202s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-921840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-921840
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-921840: (3.112502464s)
--- PASS: TestKubernetesUpgrade (350.58s)
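The sequence above (start at v1.28.0, stop, restart at v1.34.1, then a refused in-place downgrade) can be reproduced outside the suite. A minimal sketch under those assumptions follows: the profile name, versions, and the 106 (K8S_DOWNGRADE_UNSUPPORTED) exit code come from the log, while shelling out to a `minikube` on PATH (rather than reusing the test's own binaries and helpers) is an assumption.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// mk runs one minikube invocation and returns its exit error, if any.
func mk(args ...string) error { return exec.Command("minikube", args...).Run() }

func main() {
	p := "kubernetes-upgrade-921840"
	common := []string{"--driver=docker", "--container-runtime=crio"}

	_ = mk(append([]string{"start", "-p", p, "--kubernetes-version=v1.28.0"}, common...)...)
	_ = mk("stop", "-p", p)
	_ = mk(append([]string{"start", "-p", p, "--kubernetes-version=v1.34.1"}, common...)...)

	// An in-place downgrade is refused: minikube exits 106, as the
	// K8S_DOWNGRADE_UNSUPPORTED stderr block above shows.
	err := mk(append([]string{"start", "-p", p, "--kubernetes-version=v1.28.0"}, common...)...)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("downgrade refused with exit code:", ee.ExitCode())
	}
}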

TestMissingContainerUpgrade (102.7s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4284831073 start -p missing-upgrade-991969 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4284831073 start -p missing-upgrade-991969 --memory=3072 --driver=docker  --container-runtime=crio: (54.846773539s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-991969
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-991969
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-991969 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-991969 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.819523585s)
helpers_test.go:175: Cleaning up "missing-upgrade-991969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-991969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-991969: (2.054228102s)
--- PASS: TestMissingContainerUpgrade (102.70s)

TestPause/serial/Start (88.53s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-752547 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-752547 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m28.533343056s)
--- PASS: TestPause/serial/Start (88.53s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-288231 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-288231 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (120.319016ms)
-- stdout --
	* [NoKubernetes-288231] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (40.64s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-288231 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-288231 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.193254298s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-288231 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.64s)

TestNoKubernetes/serial/StartWithStopK8s (19.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-288231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1019 17:15:43.129956    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-288231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.69890412s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-288231 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-288231 status -o json: exit status 2 (326.888261ms)
-- stdout --
	{"Name":"NoKubernetes-288231","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-288231
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-288231: (2.084358985s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.11s)

TestNoKubernetes/serial/Start (6.19s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-288231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-288231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.191676849s)
--- PASS: TestNoKubernetes/serial/Start (6.19s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-288231 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-288231 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.932703ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
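The non-zero exit above is the passing outcome: systemctl is-active exits 0 only when the unit is active, and the log shows the remote command exiting with status 3 for the stopped kubelet. A minimal illustrative sketch of the same check (an assumption, not the suite's actual helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the ssh command from the log; any non-zero exit means
	// the kubelet unit is not active, which is what this test expects.
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-288231",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not running, as expected:", err)
	} else {
		fmt.Println("unexpected: kubelet is active")
	}
}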

TestNoKubernetes/serial/ProfileList (1.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-288231
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-288231: (1.315333092s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (7.5s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-288231 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-288231 --driver=docker  --container-runtime=crio: (7.504327125s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.50s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-288231 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-288231 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.314975ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestNetworkPlugins/group/false (3.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-953581 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-953581 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (196.246318ms)
-- stdout --
	* [false-953581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1019 17:16:17.695941  143952 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:16:17.696176  143952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:17.696205  143952 out.go:374] Setting ErrFile to fd 2...
	I1019 17:16:17.696224  143952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:16:17.696501  143952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-2307/.minikube/bin
	I1019 17:16:17.696978  143952 out.go:368] Setting JSON to false
	I1019 17:16:17.697909  143952 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3526,"bootTime":1760890652,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1019 17:16:17.698000  143952 start.go:143] virtualization:  
	I1019 17:16:17.701564  143952 out.go:179] * [false-953581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 17:16:17.705371  143952 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:16:17.705438  143952 notify.go:221] Checking for updates...
	I1019 17:16:17.711135  143952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:16:17.714069  143952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-2307/kubeconfig
	I1019 17:16:17.716910  143952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-2307/.minikube
	I1019 17:16:17.719738  143952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 17:16:17.722607  143952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:16:17.726082  143952 config.go:182] Loaded profile config "pause-752547": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:16:17.726234  143952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:16:17.759703  143952 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1019 17:16:17.759830  143952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:16:17.820640  143952 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 17:16:17.810833984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 17:16:17.820744  143952 docker.go:319] overlay module found
	I1019 17:16:17.823881  143952 out.go:179] * Using the docker driver based on user configuration
	I1019 17:16:17.826666  143952 start.go:309] selected driver: docker
	I1019 17:16:17.826688  143952 start.go:930] validating driver "docker" against <nil>
	I1019 17:16:17.826701  143952 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:16:17.830238  143952 out.go:203] 
	W1019 17:16:17.833095  143952 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1019 17:16:17.836831  143952 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-953581 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-953581

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-953581

>>> host: /etc/nsswitch.conf:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /etc/hosts:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /etc/resolv.conf:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-953581

>>> host: crictl pods:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: crictl containers:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> k8s: describe netcat deployment:
error: context "false-953581" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-953581" does not exist

>>> k8s: netcat logs:
error: context "false-953581" does not exist

>>> k8s: describe coredns deployment:
error: context "false-953581" does not exist

>>> k8s: describe coredns pods:
error: context "false-953581" does not exist

>>> k8s: coredns logs:
error: context "false-953581" does not exist

>>> k8s: describe api server pod(s):
error: context "false-953581" does not exist

>>> k8s: api server logs:
error: context "false-953581" does not exist

>>> host: /etc/cni:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: ip a s:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: ip r s:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: iptables-save:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: iptables table nat:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> k8s: describe kube-proxy daemon set:
error: context "false-953581" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-953581" does not exist

>>> k8s: kube-proxy logs:
error: context "false-953581" does not exist

>>> host: kubelet daemon status:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: kubelet daemon config:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> k8s: kubelet logs:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:15:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-752547
contexts:
- context:
    cluster: pause-752547
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:15:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-752547
  name: pause-752547
current-context: pause-752547
kind: Config
preferences: {}
users:
- name: pause-752547
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.crt
    client-key: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-953581

>>> host: docker daemon status:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: docker daemon config:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /etc/docker/daemon.json:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: docker system info:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: cri-docker daemon status:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: cri-docker daemon config:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: cri-dockerd version:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: containerd daemon status:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: containerd daemon config:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /etc/containerd/config.toml:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: containerd config dump:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: crio daemon status:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: crio daemon config:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: /etc/crio:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

>>> host: crio config:
* Profile "false-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-953581"

----------------------- debugLogs end: false-953581 [took: 3.441031284s] --------------------------------
helpers_test.go:175: Cleaning up "false-953581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-953581
--- PASS: TestNetworkPlugins/group/false (3.80s)

TestPause/serial/SecondStartNoReconfiguration (24.07s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-752547 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-752547 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.044873646s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.07s)

TestStoppedBinaryUpgrade/Setup (1.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
E1019 17:23:52.669862    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (1.32s)

TestStoppedBinaryUpgrade/Upgrade (63.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2610396 start -p stopped-upgrade-763312 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2610396 start -p stopped-upgrade-763312 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.751242251s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2610396 -p stopped-upgrade-763312 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2610396 -p stopped-upgrade-763312 stop: (1.298534064s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-763312 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-763312 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.647376892s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (63.70s)

TestNetworkPlugins/group/auto/Start (92.45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m32.447466675s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.45s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-763312
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-763312: (1.677048678s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

TestNetworkPlugins/group/kindnet/Start (83.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1019 17:25:43.130050    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.961147169s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.96s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-953581 "pgrep -a kubelet"
I1019 17:25:51.762321    4111 config.go:182] Loaded profile config "auto-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-953581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hmhvr" [c7e10c6f-2e39-44e6-8c65-0a4aa462c04d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hmhvr" [c7e10c6f-2e39-44e6-8c65-0a4aa462c04d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003576432s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)

TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-953581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/calico/Start (73.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.388544948s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5bvhv" [d0242e27-763d-457e-b87b-77f7e98ce7f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004296931s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-953581 "pgrep -a kubelet"
I1019 17:26:31.737918    4111 config.go:182] Loaded profile config "kindnet-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-953581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jxppm" [a3a7c4aa-41e1-421b-8e92-ab22f76c3a01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jxppm" [a3a7c4aa-41e1-421b-8e92-ab22f76c3a01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005010549s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)
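
Each NetCatPod subtest force-recreates the probe deployment and waits for it to become healthy. The framework polls the pods directly; kubectl rollout status is a reasonable stand-in when reproducing by hand:

	kubectl --context kindnet-953581 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-953581 rollout status deployment/netcat --timeout=15m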

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-953581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (60.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.234605598s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.23s)
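
Besides built-in plugin names, --cni also accepts a path to a CNI manifest, which is what this group exercises with the repo's testdata/kube-flannel.yaml:

	minikube start -p custom-flannel-953581 --memory=3072 \
	  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio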

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-zz87m" [e25a2caa-22f5-45c2-9bef-1e645089077e] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-zz87m" [e25a2caa-22f5-45c2-9bef-1e645089077e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.012947967s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-953581 "pgrep -a kubelet"
I1019 17:27:43.523054    4111 config.go:182] Loaded profile config "calico-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (13.30s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-953581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jhxqg" [6eba6744-8121-44d5-8902-f472d1ae8883] Pending
helpers_test.go:352: "netcat-cd4db9dbf-jhxqg" [6eba6744-8121-44d5-8902-f472d1ae8883] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003206074s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.30s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-953581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-953581 "pgrep -a kubelet"
I1019 17:28:09.504392    4111 config.go:182] Loaded profile config "custom-flannel-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-953581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2h9wv" [b3eb0f75-d2f0-4341-a879-028eb06febb2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2h9wv" [b3eb0f75-d2f0-4341-a879-028eb06febb2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.002931038s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

TestNetworkPlugins/group/enable-default-cni/Start (85.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m25.475341663s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.48s)
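
--enable-default-cni=true is minikube's older spelling for the built-in bridge CNI (newer releases treat it as an alias for --cni=bridge, which the bridge group below passes explicitly):

	minikube start -p enable-default-cni-953581 --memory=3072 \
	  --enable-default-cni=true --driver=docker --container-runtime=crio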

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-953581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (59.51s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1019 17:28:52.668287    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.5106873s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.51s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-953581 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
I1019 17:29:47.033415    4111 config.go:182] Loaded profile config "enable-default-cni-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:352: "kube-flannel-ds-8mjfl" [17bfc1c8-2534-406c-8189-a861beee5681] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003799472s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-953581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p6ljn" [7ac845d6-df1e-49a9-85c2-33c0c316973d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p6ljn" [7ac845d6-df1e-49a9-85c2-33c0c316973d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004196586s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-953581 "pgrep -a kubelet"
I1019 17:29:53.354257    4111 config.go:182] Loaded profile config "flannel-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-953581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h8wd8" [d7fd33d5-0183-47c8-aa03-6e6a4cae0c39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h8wd8" [d7fd33d5-0183-47c8-aa03-6e6a4cae0c39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003590145s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-953581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-953581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (54.64s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-953581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (54.643383918s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.64s)

TestStartStop/group/old-k8s-version/serial/FirstStart (68.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1019 17:30:43.129677    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.070571    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.076922    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.088269    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.109622    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.150985    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.232312    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.394320    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:52.715897    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:53.357953    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:54.639369    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:30:57.201010    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:02.323918    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:12.566279    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m8.048146529s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (68.05s)
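
The interleaved cert_rotation errors above come from client-go's certificate watcher still tracking kubeconfig entries for profiles that have already been torn down (functional-328874, auto-953581); they are stale-state noise, not failures of this test. The start itself pins an older control plane; a rough equivalent with an installed minikube:

	# --kubernetes-version selects the control-plane release to deploy.
	minikube start -p old-k8s-version-125363 --memory=3072 \
	  --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio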

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-953581 "pgrep -a kubelet"
I1019 17:31:16.548675    4111 config.go:182] Loaded profile config "bridge-953581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-953581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k6pbw" [e9d5e811-46c3-4f59-b258-5880644f2618] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k6pbw" [e9d5e811-46c3-4f59-b258-5880644f2618] Running
E1019 17:31:25.359034    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:25.365679    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:25.377091    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:25.399189    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:25.440740    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:25.522612    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:25.684319    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:26.005750    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:31:26.647752    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003277926s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-953581 exec deployment/netcat -- nslookup kubernetes.default
E1019 17:31:27.929467    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-953581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-125363 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [619df5fa-7c94-408b-8f0c-3fa2d4f82639] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [619df5fa-7c94-408b-8f0c-3fa2d4f82639] Running
E1019 17:31:45.855187    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003487762s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-125363 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)
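
DeployApp creates the busybox pod from the repo's testdata, waits for it to run, then reads the container's open-file limit as a sanity check on the runtime configuration:

	kubectl --context old-k8s-version-125363 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-125363 exec busybox -- /bin/sh -c "ulimit -n"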

TestStartStop/group/no-preload/serial/FirstStart (67.50s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.500615211s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.50s)
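
--preload=false disables the preloaded image tarball, so this start pulls every component image individually instead of extracting a cached archive:

	minikube start -p no-preload-038781 --memory=3072 --preload=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1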

TestStartStop/group/old-k8s-version/serial/Stop (13.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-125363 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-125363 --alsologtostderr -v=3: (13.879427085s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.88s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363: exit status 7 (95.783569ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-125363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
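
minikube status exits non-zero for a stopped host (exit status 7 here), which the test explicitly tolerates before enabling an addon against the stopped cluster. A sketch of the same sequence, with || true added so the stopped status does not abort a shell script:

	minikube status --format='{{.Host}}' -p old-k8s-version-125363 || true
	minikube addons enable dashboard -p old-k8s-version-125363 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4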

TestStartStop/group/old-k8s-version/serial/SecondStart (61.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1019 17:32:06.337011    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:14.009583    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.079274    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.086443    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.097830    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.119283    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.160661    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.241962    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.403736    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:37.725288    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:38.367058    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:39.648355    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:42.209823    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:47.299115    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:32:47.331413    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-125363 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.541989701s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-125363 -n old-k8s-version-125363
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (61.02s)

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-038781 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e72c8cf5-0aa2-449f-9383-3dc04b70f634] Pending
E1019 17:32:57.573441    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [e72c8cf5-0aa2-449f-9383-3dc04b70f634] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e72c8cf5-0aa2-449f-9383-3dc04b70f634] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005007132s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-038781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k2kx8" [37171d35-3991-4788-92bd-48a0fb135edf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004250354s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/Stop (12.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-038781 --alsologtostderr -v=3
E1019 17:33:09.833636    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:09.840013    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:09.851366    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:09.872701    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:09.914087    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:09.996242    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:10.157714    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:10.479290    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:11.120893    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:12.402916    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-038781 --alsologtostderr -v=3: (12.130028878s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k2kx8" [37171d35-3991-4788-92bd-48a0fb135edf] Running
E1019 17:33:14.965164    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003996018s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-125363 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-125363 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
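
VerifyKubernetesImages lists the images cached in the node and flags anything outside the expected Kubernetes set. Assuming the JSON output is an array of objects with a repoTags field (the shape current minikube releases appear to emit), the tags can be pulled out with jq:

	minikube -p old-k8s-version-125363 image list --format=json | jq -r '.[].repoTags[]'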

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781: exit status 7 (87.337714ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-038781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (53.43s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-038781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.064667349s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-038781 -n no-preload-038781
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.43s)

TestStartStop/group/embed-certs/serial/FirstStart (85.90s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 17:33:30.328664    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:35.931589    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:50.810463    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:52.667731    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:33:59.017047    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:09.221077    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.903548921s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.90s)
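
--embed-certs inlines the client certificates into kubeconfig instead of referencing files under the .minikube directory; one way to confirm (assuming kubectl can reach this context) is to dump the minified raw config and look for inline certificate data:

	minikube start -p embed-certs-296314 --memory=3072 --embed-certs \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
	kubectl config view --minify --raw --context embed-certs-296314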

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qdn5q" [7eb3b8ac-a1b4-4677-8411-2b730be7c599] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002977853s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qdn5q" [7eb3b8ac-a1b4-4677-8411-2b730be7c599] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003832039s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-038781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-038781 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 17:34:47.030613    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.037120    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.048452    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.069833    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.111912    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.193305    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.272969    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.279383    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.290924    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.312280    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.354578    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.355693    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.436059    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.598289    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.676952    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:47.919634    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:48.318572    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:48.561042    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:49.599988    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:49.842929    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:52.161302    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:34:52.404818    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.622165687s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.62s)
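The E1019 cert_rotation lines interleaved above appear to come from client-go trying to reload client certificates for kubeconfig entries (flannel-953581, enable-default-cni-953581, and similar profiles) whose profile directories earlier tests already deleted; they are noise from the shared test kubeconfig, not a failure of this start. A hedged sketch of how such stale entries could be listed with client-go's clientcmd loader (illustrative only; the suite itself does no such cleanup):

// stale_certs_sketch.go - list kubeconfig users whose client certs are gone.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the test kubeconfig.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, statErr := os.Stat(auth.ClientCertificate); os.IsNotExist(statErr) {
			// Entries like these produce the cert_rotation errors above.
			fmt.Printf("stale user %q: missing %s\n", name, auth.ClientCertificate)
		}
	}
}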

TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-296314 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5ee07b45-0bf9-4e9d-9224-b8525bbf763b] Pending
helpers_test.go:352: "busybox" [5ee07b45-0bf9-4e9d-9224-b8525bbf763b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1019 17:34:57.282650    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [5ee07b45-0bf9-4e9d-9224-b8525bbf763b] Running
E1019 17:34:57.526320    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004695481s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-296314 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.45s)
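The closing assertion of DeployApp execs "ulimit -n" inside the pod to confirm the container kept a usable open-file limit. A stand-alone version of that probe (context name taken from the log; the 1024 lower bound is an assumed threshold, not a value from this report):

// ulimit_probe_sketch.go - rerun the DeployApp ulimit check by hand.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-296314",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	if n < 1024 { // assumed sanity threshold
		fmt.Println("open-file limit suspiciously low:", n)
		return
	}
	fmt.Println("ulimit -n =", n)
}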

TestStartStop/group/embed-certs/serial/Stop (12.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-296314 --alsologtostderr -v=3
E1019 17:35:07.524445    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:35:07.768593    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:35:15.740163    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/addons-567517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-296314 --alsologtostderr -v=3: (12.240805021s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314: exit status 7 (68.943279ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-296314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
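Note the "(may be ok)" wording above: minikube status reports cluster state partly through its exit code, and the harness tolerates exit status 7 here because a fully stopped cluster is exactly what this step expects. A sketch of accepting that code the same way (reading 7 as "stopped" is inferred from this log, not from minikube documentation):

// status_sketch.go - treat minikube status exit code 7 as a clean "Stopped".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-296314").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Matches the harness's "status error: exit status 7 (may be ok)".
		fmt.Println("host state (stopped is expected):", string(out))
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("host state:", string(out))
}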

TestStartStop/group/embed-certs/serial/SecondStart (58.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 17:35:20.938740    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/calico-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:35:28.007040    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:35:28.250592    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:35:43.130004    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/functional-328874/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:35:52.070570    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:35:53.693802    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/custom-flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-296314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.884416927s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-296314 -n embed-certs-296314
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.26s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-370596 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fde11acc-3723-4708-bdc8-173c2bf1233d] Pending
helpers_test.go:352: "busybox" [fde11acc-3723-4708-bdc8-173c2bf1233d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fde11acc-3723-4708-bdc8-173c2bf1233d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003437055s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-370596 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-370596 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-370596 --alsologtostderr -v=3: (12.033627105s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qqbvj" [f55b8585-f906-45b9-9eee-4978b9ccde17] Running
E1019 17:36:16.845007    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:16.851601    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:16.863209    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:16.884640    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:16.926051    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:17.007648    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:17.169567    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:17.491571    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:18.133482    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:19.415508    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:19.773193    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/auto-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:21.977772    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003201984s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)
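The UserAppExistsAfterStop helper is essentially a label-selector poll: list pods matching k8s-app=kubernetes-dashboard until one is Running or the 9m budget expires. A compressed client-go sketch of the same wait (namespace and selector from the log; kubeconfig handling and the poll interval are assumptions):

// pod_wait_sketch.go - wait for a Running pod matching a label selector.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, listErr := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if listErr != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("healthy:", p.Name)
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
}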

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qqbvj" [f55b8585-f906-45b9-9eee-4978b9ccde17] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003809614s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-296314 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596: exit status 7 (78.963062ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-370596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 17:36:25.359053    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:27.099962    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-370596 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.785936173s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-370596 -n default-k8s-diff-port-370596
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.40s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-296314 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/FirstStart (41.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 17:36:43.847394    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:48.969293    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:53.062761    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/kindnet-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:57.826678    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/bridge-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:36:59.211012    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:37:19.693203    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/old-k8s-version-125363/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.025673171s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.03s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vv2r4" [1535a391-32cd-430f-911d-6f819ec0e20c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004118121s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-633463 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-633463 --alsologtostderr -v=3: (1.557724174s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.56s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463: exit status 7 (76.594375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-633463 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (18.76s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-633463 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (18.313235399s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-633463 -n newest-cni-633463
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.76s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vv2r4" [1535a391-32cd-430f-911d-6f819ec0e20c] Running
E1019 17:37:30.890393    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/flannel-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:37:31.134248    4111 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/enable-default-cni-953581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004020785s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-370596 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-370596 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-633463 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-893374 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-893374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-893374
--- SKIP: TestDownloadOnlyKic (0.44s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
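Most entries in this skip section follow the same guard idiom as TestOffline: inspect the architecture, driver, or container runtime up front and bail out with t.Skip before any cluster work starts. A generic sketch of that pattern (the helper name and exact condition are illustrative, not minikube's actual helpers):

// skip_guard_sketch.go - the arch/runtime skip idiom used throughout this suite.
package skipguard

import (
	"runtime"
	"testing"
)

// skipIfUnsupported mirrors guards like the one in aab_offline_test.go above.
func skipIfUnsupported(t *testing.T, containerRuntime string) {
	t.Helper()
	if runtime.GOARCH == "arm64" && containerRuntime != "docker" {
		t.Skipf("skipping - only docker runtime supported on arm64 (got %s). See https://github.com/kubernetes/minikube/issues/10144", containerRuntime)
	}
}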

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.6s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-953581 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-953581

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-953581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-953581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-953581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-953581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-953581" does not exist

>>> k8s: coredns logs:
error: context "kubenet-953581" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-953581" does not exist

>>> k8s: api server logs:
error: context "kubenet-953581" does not exist

>>> host: /etc/cni:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: ip a s:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: ip r s:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: iptables-save:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: iptables table nat:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-953581" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-953581" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-953581" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: kubelet daemon config:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> k8s: kubelet logs:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:15:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-752547
contexts:
- context:
    cluster: pause-752547
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:15:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-752547
  name: pause-752547
current-context: pause-752547
kind: Config
preferences: {}
users:
- name: pause-752547
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.crt
    client-key: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key
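
The kubeconfig dumped above defines exactly one context, pause-752547, so every probe pinned to kubenet-953581 fails before it can talk to a cluster. Below is a minimal Go sketch of that lookup, assuming k8s.io/client-go is available on the module path; it resolves the kubeconfig the same way kubectl does and reproduces the missing-context error seen throughout this dump.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the merged kubeconfig exactly as kubectl would
	// (KUBECONFIG if set, otherwise ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("defined context:", name)
	}
	// The dump above defines only "pause-752547", so this lookup fails,
	// which is the source of the repeated missing-context errors.
	if _, ok := cfg.Contexts["kubenet-953581"]; !ok {
		fmt.Println(`context "kubenet-953581" does not exist`)
	}
}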

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-953581

>>> host: docker daemon status:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: docker daemon config:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: docker system info:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: cri-docker daemon status:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: cri-docker daemon config:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: cri-dockerd version:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: containerd daemon status:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: containerd daemon config:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: containerd config dump:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: crio daemon status:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: crio daemon config:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: /etc/crio:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

>>> host: crio config:
* Profile "kubenet-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-953581"

----------------------- debugLogs end: kubenet-953581 [took: 3.423751483s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-953581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-953581
--- SKIP: TestNetworkPlugins/group/kubenet (3.60s)
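
For orientation: the debugLogs dump above is one probe loop. Each entry prints a ">>> <area>: <probe>:" header followed by the probe's combined stdout/stderr, which here is always a missing-context or missing-profile error because the kubenet-953581 profile was never started. A rough, hypothetical Go sketch of that pattern follows; the labels and commands are illustrative, not minikube's actual collector.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubenet-953581"
	probes := []struct {
		label string
		args  []string
	}{
		{"k8s: kube-proxy logs", []string{"kubectl", "--context", profile, "-n", "kube-system", "logs", "-l", "k8s-app=kube-proxy"}},
		{"host: ip a s", []string{"minikube", "-p", profile, "ssh", "--", "ip", "a", "s"}},
	}
	for _, p := range probes {
		// Header line, matching the ">>> ..." markers in the dump above.
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil && len(out) == 0 {
			fmt.Println(err)
		}
		fmt.Println()
	}
}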

                                                
                                    
TestNetworkPlugins/group/cilium (4.7s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-953581 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-953581

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-953581
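
The dig/nc probes above all target 10.96.0.10, the default ClusterIP that kubeadm-style clusters assign to the kube-dns service. As a sketch of what they would verify on a live cluster, here is the same A-record lookup using only the Go standard library; it can only succeed from a machine that can reach the cluster DNS, which is exactly why every probe fails once the profile is gone. The address and hostname below come straight from the probe labels.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolve against the cluster DNS service directly, like
	// "dig @10.96.0.10 kubernetes.default.svc.cluster.local".
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("a-records:", addrs)
}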

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /etc/hosts:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /etc/resolv.conf:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-953581

>>> host: crictl pods:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: crictl containers:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> k8s: describe netcat deployment:
error: context "cilium-953581" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-953581" does not exist

>>> k8s: netcat logs:
error: context "cilium-953581" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-953581" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-953581" does not exist

>>> k8s: coredns logs:
error: context "cilium-953581" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-953581" does not exist

>>> k8s: api server logs:
error: context "cilium-953581" does not exist

>>> host: /etc/cni:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: ip a s:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: ip r s:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: iptables-save:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: iptables table nat:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-953581

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-953581

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-953581" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-953581" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-953581

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-953581

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-953581" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-953581" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-953581" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-953581" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-953581" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: kubelet daemon config:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> k8s: kubelet logs:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-2307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:15:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-752547
contexts:
- context:
    cluster: pause-752547
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:15:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-752547
  name: pause-752547
current-context: pause-752547
kind: Config
preferences: {}
users:
- name: pause-752547
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.crt
    client-key: /home/jenkins/minikube-integration/21683-2307/.minikube/profiles/pause-752547/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-953581

>>> host: docker daemon status:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: docker daemon config:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: docker system info:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: cri-docker daemon status:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: cri-docker daemon config:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: cri-dockerd version:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: containerd daemon status:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: containerd daemon config:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: containerd config dump:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: crio daemon status:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: crio daemon config:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: /etc/crio:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

>>> host: crio config:
* Profile "cilium-953581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953581"

----------------------- debugLogs end: cilium-953581 [took: 4.467241366s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-953581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-953581
--- SKIP: TestNetworkPlugins/group/cilium (4.70s)
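
The SKIP verdict above is produced by Go's testing package: net_test.go:102 calls t.Skip, the test goroutine exits, and the runner records "--- SKIP" instead of a failure. A minimal self-contained sketch of such a guard (the package and function names here are assumptions, not minikube's actual code):

package net_test

import "testing"

func TestCiliumGuard(t *testing.T) {
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	// Nothing past t.Skip runs; the harness reports SKIP, not FAIL.
}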

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-167748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-167748
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
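
This skip is the conditional variant: the guard inspects the active driver and calls t.Skipf unless it is virtualbox. Again a hypothetical sketch; in the real suite the driver value comes from the harness flags rather than a package variable.

package startstop_test

import "testing"

// Stand-in for the driver selected by the test harness.
var driver = "docker"

func TestDisableDriverMountsGuard(t *testing.T) {
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// Driver-mount assertions would run here on virtualbox.
}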

                                                
                                    